often requiring external proxy data. Table 8 presents the results for two prevalent Backdoor Attack variants: Bac and Sem Bac. Additionally, we assess the robustness of two prominent Backdoor Defense techniques, RLR [136] and CRFL [22], which provide effective defense capabilities against backdoor threats.

Evaluation Metrics for fairness. As described in § A.2.1, Contribution Match Degree (E) and Performance Deviation (V) are metrics specifically designed to assess Performance Fairness.

Results: As shown in Table 5 and Table 6, few of the existing federated optimization methods take Collaboration Fairness into account in the federated objective. Besides, fairness is largely impeded under large local data distribution diversity, such as Domain Skew. Regarding Performance Fairness, existing methods focus on minimizing the weighted empirical loss and thus bring imbalanced performance. Notably, global network utilization and server adaptive optimization seem to alleviate the imbalanced performance across multiple domains.

Table 8: Quantitative Backdoor Attack results in terms of Au and Ru on Cifar-10, MNIST, and USPS. The local optimization algorithm is FedAvg [118]. We consider two types of backdoor attacks and abbreviate them as Bac [193] and Sem Bac [36]. Each cell reports Au / Ru under the two evaluated settings, 0.5 and 0.3. "-" means that these solutions are not applicable to these evaluations. Refer to § 5.2 for the Backdoor Defense discussion.

Cifar-10:

| Methods | Bac (0.5) | Sem Bac (0.5) | Bac (0.3) | Sem Bac (0.3) |
|---|---|---|---|---|
| Bulyan [153] | 47.61 / 28.73 | 44.61 / 17.12 | - / - | 11.12 / 19.56 |
| Trim Median [35] | 51.34 / 22.49 | 52.21 / 13.70 | - / - | 14.78 / 51.66 |
| FoolsGold [42] | 60.69 / 62.54 | 60.50 / 13.06 | 58.58 / 56.85 | 59.84 / 12.56 |
| DnC [162] | 59.30 / 23.07 | 61.40 / 12.88 | 60.03 / 42.79 | 59.80 / 9.76 |
| FLTrust [192] | / | / | / | / |
| Sageflow [70] | / | / | / | / |
| RFA [85] | 64.90 / 74.31 | 63.90 / 11.54 | 60.36 / 75.57 | 62.75 / 14.76 |
| RLR [136] | 51.65 / 28.83 | 50.37 / 10.60 | - / - | 44.80 / 20.74 |
| CRFL [22] | 59.27 / 63.29 | 58.59 / 9.52 | 52.27 / 59.50 | 52.62 / 11.66 |

MNIST:

| Methods | Bac (0.5) | Sem Bac (0.5) | Bac (0.3) | Sem Bac (0.3) |
|---|---|---|---|---|
| Bulyan [153] | 96.95 / 14.77 | 92.13 / 0.45 | 87.70 / 11.13 | 87.86 / 0.10 |
| Trim Median [35] | 98.07 / 99.18 | 98.44 / 0.16 | 96.65 / 89.42 | 96.72 / 0.61 |
| FoolsGold [42] | 82.20 / 91.61 | 98.45 / 0.59 | 92.88 / 98.06 | 97.00 / 1.52 |
| DnC [162] | 99.26 / 10.39 | 99.13 / 0.20 | 98.53 / 10.46 | 98.79 / 0.29 |
| FLTrust [192] | 95.31 / 8.71 | 97.84 / 0.00 | 92.55 / 10.03 | 97.43 / 0.30 |
| Sageflow [70] | 99.17 / 98.70 | 99.21 / 0.53 | 99.03 / 98.05 | 98.83 / 1.27 |
| RFA [85] | 99.09 / 99.09 | 99.12 / 0.32 | 99.11 / 98.88 | 98.84 / 0.39 |
| RLR [136] | 94.77 / 10.54 | 93.11 / 0.40 | 91.11 / 22.69 | 92.94 / 0.35 |
| CRFL [22] | 98.93 / 33.86 | 98.89 / 0.43 | 98.44 / 26.28 | 98.08 / 0.91 |

USPS:

| Methods | Bac (0.5) | Sem Bac (0.5) | Bac (0.3) | Sem Bac (0.3) |
|---|---|---|---|---|
| Bulyan [153] | 93.32 / 10.95 | 93.52 / 11.32 | 87.79 / 10.83 | 85.14 / 1.56 |
| Trim Median [35] | 94.62 / 71.52 | 94.24 / 4.82 | 92.05 / 84.17 | 94.77 / 2.40 |
| FoolsGold [42] | 89.66 / 90.24 | 83.21 / 10.11 | 76.56 / 86.14 | 94.77 / 2.40 |
| DnC [162] | 95.75 / 9.62 | 95.11 / 2.89 | 96.14 / 16.89 | 94.86 / 1.81 |
| FLTrust [192] | 71.67 / 17.69 | 59.83 / 20.96 | 63.20 / 5.29 | 63.20 / 5.29 |
| Sageflow [70] | 96.07 / 73.63 | 96.20 / 3.61 | 96.83 / 86.39 | 96.02 / 2.65 |
| RFA [85] | 95.89 / 2.28 | 95.75 / 3.13 | 97.04 / 39.59 | 95.89 / 2.28 |
| RLR [136] | 89.20 / 10.78 | 92.00 / 12.65 | 87.00 / 10.27 | 82.15 / 1.44 |
| CRFL [22] | 94.96 / 49.77 | 95.31 / 3.61 | 95.38 / 62.98 | 94.36 / 1.32 |

The first seven methods (Bulyan through RFA) focus on Byzantine Tolerance (§ 3); RLR and CRFL focus on Backdoor Defense.

6 Future Outlook

(1) Summary of Experimental Observations. Our evaluation surfaces key trends and gaps across federated learning methods:

• Reasoning Benchmark Void. Despite its importance, reasoning in FL
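The excerpt reports Au and Ru in Table 8 without spelling out how they are computed. As a minimal, hypothetical illustration only: the sketch below assumes a BadNets-style corner pixel trigger for Bac, takes Au to be accuracy on clean inputs, and takes Ru to be the rate at which trigger-stamped inputs are classified as the attacker's target label. The trigger shape, the target class, the toy classifier, and all function names are assumptions for illustration, not ATR-Bench's actual evaluation code.

```python
import numpy as np

def add_pixel_trigger(images, trigger_value=1.0, size=3):
    """Stamp a small square trigger in the bottom-right corner (BadNets-style)."""
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = trigger_value
    return poisoned

def evaluate_backdoor(predict, x_clean, y_clean, target_label):
    """Return (Au, Ru): clean-task accuracy and attack success on triggered inputs."""
    au = float(np.mean(predict(x_clean) == y_clean)) * 100
    x_trig = add_pixel_trigger(x_clean)
    # Attack success: fraction of triggered inputs mapped to the attacker's target.
    ru = float(np.mean(predict(x_trig) == target_label)) * 100
    return au, ru

# Toy "model": predicts the target class whenever the trigger is present.
def toy_predict(x):
    has_trigger = x[:, -3:, -3:].mean(axis=(1, 2)) > 0.99
    return np.where(has_trigger, 7, 0)  # class 7 = hypothetical attacker target

x = np.zeros((4, 28, 28))
y = np.zeros(4, dtype=int)
au, ru = evaluate_backdoor(toy_predict, x, y, target_label=7)
print(au, ru)  # 100.0 100.0 for this fully backdoored toy model
```

A defense such as RLR or CRFL would aim to drive Ru down while keeping Au high, which is the trade-off the rows of Table 8 summarize.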
https://arxiv.org/abs/2505.16850v1
remains underexplored due to the lack of dedicated benchmarks, reliable metrics, and structured evaluation settings. Most existing methods focus on performance, with limited support for interpretability, traceability, or privacy-aware inference. We address this gap conceptually in Section 4 and encourage future empirical studies.

• Reproducibility Dilemma. Many FL studies lack transparent experimental setups and open-source code. The inconsistency in datasets and models complicates fair comparisons, undermining reproducibility.

• Computational Efficiency Gap. Despite strong accuracy claims, most methods overlook memory and runtime overheads. In real-world deployments, especially cross-device [51] and cross-silo [217, 210] settings, efficiency is often a limiting factor.

• Fragmented Solutions. FL research often targets isolated issues like heterogeneity [201], robustness [72], or fairness [219], lacking unified solutions that balance performance, trust, and efficiency.

(2) Open Issues and Future Opportunities.

• Building a Reasoning Benchmark. Our work outlines a preliminary roadmap for benchmarking reasoning in FL. Future efforts should define evaluation criteria for trace coherence, faithfulness, and privacy-preserving reasoning across decentralized clients.

• Towards Reproducibility. ATR-Bench introduces a unified taxonomy, standard protocols, and open-source assets to enhance comparability. Future work should prioritize consistent baselines and transparent reporting practices.

• Advancing Efficiency. While optimizations like quantization, pruning, and homomorphic encryption [160] have emerged, trade-offs remain. Future FL systems must balance speed, scalability, and security to support edge-centric applications.

• Toward Holistic Evaluation.
We advocate for comprehensive benchmarks that jointly assess generalization, robustness, fairness, reasoning, and efficiency across diverse modalities, including video and multimodal settings, to close the gap between research and deployment.

7 Conclusion

We introduce ATR-Bench, the first comprehensive benchmark that systematically evaluates federated learning (FL) across three foundational dimensions: Adaptation, Trust, and Reasoning. Our work benchmarks various FL methods, categorizing them by task settings, learning strategies, and contributions. In addition, we conducted a unified empirical evaluation on eight widely used FL datasets to uncover key insights and limitations in existing approaches. ATR-Bench highlights persistent challenges in the field and establishes a foundation for developing robust, trustworthy, and adaptive FL systems, paving the way for future research and practical deployment.

Limitations. Benchmarking reasoning in LLMs remains a bottleneck, especially in federated settings where reasoning traces vary across clients; ATR-Bench highlights this gap and motivates the need for unified metrics to assess coherence, faithfulness, and adaptability of distributed reasoning.

References

[1] D.A.E. Acar, Y. Zhao, R. Matas, M. Mattina, P. Whatmough, and V. Saligrama. Federated learning based on dynamic regularization. In ICLR, 2021.
[2] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[3] A. Nguyen, T. Do, M. Tran, B.X. Nguyen, C. Duong, T. Phan, E. Tjiputra, and Q.D. Tran. Deep federated learning for autonomous driving. In IEEE IV, pages 1824–1830, 2022.
[4] B. Han, Q. Yao, X. Yu, G. Niu, M. Xu, W. Hu, I. Tsang, and M. Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In NeurIPS, volume 31, 2018.
[5] J.M. Bilbao. Cooperative games on combinatorial structures, volume 26. Springer Science & Business Media, 2012.
[6] B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017.
[7] B. Lakshminarayanan, A. Pritzel, and C. Blundell. Simple and scalable
predictive uncertainty estimation using deep ensembles. In NeurIPS, volume 30, 2017.
[8] P. Blanchard, E.M. El Mhamdi, R. Guerraoui, and J. Stainer. Machine learning with adversaries: Byzantine tolerant gradient descent. In NeurIPS, 2017.
[9] B. Van Rooyen, A. Menon, and R.C. Williamson. Learning with symmetric label noise: The importance of being unhinged. In NeurIPS, volume 28, 2015.
[10] C. Guo, G. Pleiss, Y. Sun, and K.Q. Weinberger. On calibration of modern neural networks. In ICML, pages 1321–1330, 2017.
[11] H.-Y. Chen and W.-L. Chao. Fedbe: Making bayesian model ensemble applicable to federated learning. In ICLR, 2021.
[12] J. Chen, M. Jiang, Q. Dou, and Q. Chen. Federated domain generalization for image recognition via cross-client style transfer. In WACV, pages 361–370, 2023.
[13] S. Chen, L. Zhang, and L. Zhang. Msdformer: Multiscale deformable transformer for hyperspectral image super-resolution. IEEE TGRS, 2023.
[14] T. Chen, C. Gong, D.J. Diaz, X. Chen, J.T. Wells, Q. Liu, Z. Wang, A. Ellington, A. Dimakis, and A. Klivans. Hotprotein: A novel framework for protein thermostability prediction and editing. In ICLR, 2023.
[15] Zhongwu Chen, Chengjin Xu, Fenglong Su, Zhen Huang, and Yong Dou. Meta-learning based knowledge extrapolation for temporal knowledge graph. In Proceedings of the ACM Web Conference 2023, WWW '23, pages 2433–2443. Association for Computing Machinery, 2023.
[16] Zhuoxin Chen, Zhenyu Wu, and Yang Ji. Decoupled federated learning on long-tailed and non-iid data with feature statistics. In 2024 IEEE International Conference on Multimedia and Expo (ICME), pages 1–6, 2024.
[17] C. Liao, H. Zhong, A. Squicciarini, S. Zhu, and D. Miller. Backdoor embedding in convolutional neural network models via invisible perturbation. arXiv preprint arXiv:1808.10307, 2018.
[18] S. Cui, W. Pan, J. Liang, C. Zhang, and F. Wang. Addressing algorithmic disparity and performance inconsistency in federated learning. In NeurIPS, pages 26091–26102, 2021.
[19] C. Wu, F. Wu, L. Lyu, T. Qi, Y. Huang, and X. Xie. A federated graph neural network framework for privacy-preserving personalization. Nature Communications, 13(1):3091, 2022.
[20] C. Wu, X. Yang, S. Zhu, and P. Mitra. Mitigating backdoor attacks in federated learning. arXiv preprint arXiv:2011.01767, 2020.
[21] C. Xie, K. Huang, P.-Y. Chen, and B. Li. Dba: Distributed backdoor attacks against federated learning. In ICLR, 2020.
[22] C. Xie, M. Chen, P.-Y. Chen, and B. Li. Crfl: Certifiably robust federated learning against backdoor attacks. In ICML, pages 11372–11382. PMLR, 2021.
[23] C. Xie, O. Koyejo, and I. Gupta. Fall of empires: Breaking byzantine-tolerant sgd by inner product manipulation. In UAI, pages 261–270, 2020.
[24] C. Xu, Y. Qu, Y. Xiang, and L. Gao. Asynchronous federated learning on heterogeneous devices: A survey. arXiv preprint arXiv:2109.04269, 2021.
[25] Shenghong Dai, Yicong Chen, Jy-yong Sohn, S M Iftekharul Alam, Ravikumar Balakrishnan, Suman Banerjee, Nageen Himayat, and Kangwook Lee. FedGP: Buffer-based gradient projection for continual federated learning, 2024.
[26] G. Damaskinos, R. Guerraoui, R. Patra, M. Taziki, et al. Asynchronous byzantine machine learning (the case of sgd). In ICML, pages 1145–1154, 2018.
[27] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané. Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.
[28] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei.
Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE, 2009.
[29] C. Dengsheng, J. Hu, V.J.K. Tan, and E. Wu. Elastic aggregation for federated optimization. In CVPR, 2023.
[30] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019.
[31] Oscar Dilley, Juan Marcelo Parra-Ullauri, Rasheed Hussain, and Dimitra Simeonidou. Federated fairness analytics: Quantifying fairness in federated learning, 2024.
[32] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
[33] D. Peterson, P. Kanani, and V.J. Marathe. Private federated learning with domain adaptation. In NeurIPS, 2019.
[34] J.-h. Duan, W. Li, D. Zou, R. Li, and S. Lu. Federated learning with data-agnostic distribution fusion. In CVPR, 2023.
[35] D. Yin, Y. Chen, R. Kannan, and P. Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. In ICML, pages 5650–5659, 2018.
[36] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov. How to backdoor federated learning. In AISTATS, pages 2938–2948, 2020.
[37] Y.H. Ezzeldin, S. Yan, C. He, E. Ferrara, and A.S. Avestimehr. Fairfed: Enabling group fairness in federated learning. In AAAI, pages 7494–7502, 2023.
[38] Alireza Fallah, Aryan Mokhtari, and Asuman Ozdaglar. Personalized federated learning with theoretical guarantees: A model-agnostic meta-learning approach. In Advances in Neural Information Processing Systems, volume 33, pages 3557–3568. Curran Associates, Inc., 2020.
[39] Li Fei-Fei, Rob Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding, 106(1):59–70, April 2007.
[40] Zhanbo Feng, Yuanjie Wang, Jie Li, Fan Yang, Jiong Lou, Tiebin Mi, Robert Caiming Qiu, and Zhenyu Liao. Robust and communication-efficient federated domain adaptation via random features. IEEE Transactions on Knowledge & Data Engineering, 37(03):1411–1424, March 2025.
[41] F. Sattler, T. Korjakow, R. Rischke, and W. Samek. Fedaux: Leveraging unlabeled auxiliary data in federated learning. IEEE TNNLS, 2021.
[42] C. Fung, C.J. Yoon, and I. Beschastnikh. Mitigating sybils in federated learning poisoning. arXiv preprint arXiv:1808.04866, 2018.
[43] Felipe Garrido-Lucero, Benjamin Heymann, Maxime Vono, Patrick Loiseau, and Vianney Perchet. Du-shapley: A shapley value proxy for efficient dataset valuation. In Advances in Neural Information Processing Systems, volume 37, pages 1973–2000. Curran Associates, Inc., 2024.
[44] G. Baruch, M. Baruch, and Y. Goldberg. A little is enough: Circumventing defenses for distributed learning. In NeurIPS, volume 32, 2019.
[45] G. Lee, M. Jeong, Y. Shin, S. Bae, and S.-Y. Yun. Preservation of the global knowledge by not-true distillation in federated learning. In NeurIPS, 2022.
[46] G. Li, Q. Zhang, P. Wang, J. Zhang, and C. Wu. Federated
domain adaptation via pseudo-label refinement. In ICME, pages 1829–1834. IEEE, 2023.
[47] Xuan Gong, Abhishek Sharma, Srikrishna Karanam, Ziyan Wu, Terrence Chen, David Doermann, and Arun Innanje. Preserving privacy in federated learning with ensemble cross-domain knowledge distillation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11):11891–11899, June 2022.
[48] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NeurIPS, 2014.
[49] X. Guo, P. Xing, S. Feng, B. Li, and C. Miao. Federated learning with diversified preference for humor recognition. In IJCAI Workshop, 2020.
[50] G. Wu and S. Gong. Collaborative optimization and aggregation for decentralized domain generalization and adaptation. In ICCV, pages 6484–6493, 2021.
[51] A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604, 2018.
[52] C. He, M. Annavaram, and S. Avestimehr. Group knowledge transfer: Federated learning of large cnns at the edge. In NeurIPS, pages 14068–14080, 2020.
[53] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
[54] Charuka Herath, Yogachandran Rahulamathavan, and Xiaolan Liu. Recursive euclidean distance-based robust aggregation technique for federated learning. In 2023 IEEE IAS Global Conference on Emerging Technologies (GlobConET), pages 1–6, 2023.
[55] H. Feng, Z. You, M. Chen, T. Zhang, M. Zhu, F. Wu, C. Wu, and W. Chen. Kd3a: Unsupervised multi-source decentralized domain adaptation via knowledge distillation. In ICML, pages 3274–3283, 2021.
[56] Yassine Himeur, Iraklis Varlamis, Hamza Kheddar, Abbes Amira, Shadi Atalla, Yashbir Singh, Faycal Bensaali, and Wathiq Mansoor. Federated learning for computer vision, 2023.
[57] S. Hong and J. Chae. Communication-efficient randomized algorithm for multi-kernel online federated learning. IEEE PAMI, 44(12):9872–9886, 2021.
[58] Edward J. Hu, Yelong Shen, Phil Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[59] G. Huang, Z. Liu, G. Pleiss, L. Van Der Maaten, and K. Weinberger. Convolutional networks with dense connectivity. IEEE PAMI, 2019.
[60] W. Huang, M. Ye, and B. Du. Learn from others and be yourself in heterogeneous federated learning. In CVPR, 2022.
[61] W. Huang, M. Ye, Z. Shi, H. Li, and B. Du. Rethinking federated learning with domain shift: A prototype view. In CVPR, pages 16312–16322, 2023.
[62] Wenke Huang, Mang Ye, Zekun Shi, Guancheng Wan, He Li, Bo Du, and Qiang Yang. Federated learning for generalization, robustness, fairness: A survey and benchmark. IEEE Transactions on Pattern Analysis & Machine Intelligence, 46(12):9387–9406, December 2024.
[63] J.J. Hull. A database for handwritten text recognition research. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(5):550–554, 1994.
[64] H. Wang, Y. Li, W. Xu, R. Li, Y. Zhan, and Z. Zeng. Dafkd: Domain-aware federated knowledge distillation. In CVPR, 2023.
[65] H. Yuan, W. Morningstar, L. Ning, and K. Singhal. What do we mean by generalization in federated learning? In ICLR, 2022.
[66] H. Zhu, J. Xu, S. Liu, and Y. Jin. Federated learning on non-iid data: A survey. NC,
pages 371–390, 2021.
[67] M. Jiang, H.R. Roth, W. Li, D. Yang, C. Zhao, V. Nath, D. Xu, Q. Dou, and Z. Xu. Fair federated medical image segmentation via client contribution estimation. In CVPR, 2023.
[68] J. Kang, Z. Xiong, D. Niyato, H. Yu, Y. Liang, and D.I. Kim. Incentive design for efficient federated learning in mobile networks: A contract theory approach. In APWCS, pages 1–5, 2019.
[69] J. Kim, G. Kim, and B. Han. Multi-level branched regularization for federated learning. In ICML, pages 11058–11073, 2022.
[70] J. Park, D.-J. Han, M. Choi, and J. Moon. Sageflow: Robust federated learning against both stragglers and adversaries. In NeurIPS, pages 840–851, 2021.
[71] J. Shao, Z. Li, W. Sun, T. Zhou, Y. Sun, L. Liu, Z. Lin, and J. Zhang. A survey of what to share in federated learning: Perspectives on model utility, privacy leakage, and communication efficiency. arXiv preprint arXiv:2307.10655, 2023.
[72] J. Shi, W. Wan, S. Hu, J. Lu, and L.Y. Zhang. Challenges and approaches for mitigating byzantine attacks in federated learning. In IEEE TrustCom, pages 139–146, 2022.
[73] C. Ju, R. Zhao, J. Sun, X. Wei, B. Zhao, Y. Liu, H. Li, T. Chen, X. Zhang, D. Gao, et al. Privacy-preserving technology to help millions of people: Federated prediction model for stroke prevention. arXiv preprint arXiv:2006.10517, 2020.
[74] J. Yuan, X. Ma, D. Chen, F. Wu, L. Lin, and K. Kuang. Collaborative semantic aggregation and calibration for federated domain generalization. IEEE TKDE, 2023.
[75] J. Zhang, C. Li, A. Robles-Kelly, and M. Kankanhalli. Hierarchically fair federated learning. arXiv, 2020.
[76] Shivam Kalra, Junfeng Wen, Jesse C. Cresswell, Maksims Volkovs, and H.R. Tizhoosh. Decentralized federated learning through proxy model sharing. Nature Communications, 14(1):2899, May 2023.
[77] Honggu Kang, Seohyeon Cha, and Joonhyuk Kang. Gefl: Model-agnostic federated learning with generative models, 2024.
[78] S.P. Karimireddy, S. Kale, M. Mohri, S.J. Reddi, S.U. Stich, and A.T. Suresh. Scaffold: Stochastic controlled averaging for on-device federated learning. In ICML, 2020.
[79] K. Hsieh, A. Phanishayee, O. Mutlu, and P. Gibbons. The non-iid data quagmire of decentralized machine learning. In ICML, pages 4387–4398, 2020.
[80] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013.
[81] J. Konečný, H.B. McMahan, D. Ramage, and P. Richtárik. Federated optimization: Distributed machine learning for on-device intelligence. arXiv preprint arXiv:1610.02527, 2016.
[82] J. Konečný, H.B. McMahan, F.X. Yu, P. Richtárik, A.T. Suresh, and D. Bacon. Federated learning: Strategies for improving communication efficiency. arXiv preprint arXiv:1610.05492, 2016.
[83] S. Kotz, N. Balakrishnan, and N.L. Johnson. Continuous multivariate distributions, Volume 1: Models and applications, volume 1. John Wiley & Sons, 2004.
[84] Mohamed Amine Kouda, Badis Djamaa, and Ali Yachir. An efficient federated learning solution for the artificial intelligence of things. Future Generation Computer Systems, 163:107533, 2025.
[85] K. Pillutla, S.M. Kakade, and Z. Harchaoui. Robust aggregation for federated learning. IEEE TSP, 70:1142–1154, 2022.
[86] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[87] L. Gao, H. Fu, L. Li, Y. Chen, M. Xu, and C.-Z. Xu. Feddc: Federated learning with non-iid data via local drift decoupling and correction. In CVPR, 2022.
[88] L. Huang, A.D. Joseph, B. Nelson, B.I.
Rubinstein, and J.D. Tygar. Adversarial machine learning. In ACM Workshop on Security and Artificial Intelligence, pages 43–58, 2011.
[89] D. Li and J. Wang. Fedmd: Heterogeneous federated learning via model distillation. In NeurIPS Workshop, 2019.
[90] Qinbin Li, Bingsheng He, and Dawn Song. Model-contrastive federated learning. In CVPR, 2021.
[91] T. Li, S. Hu, A. Beirami, and V. Smith. Ditto: Fair and robust federated learning through personalization. In ICML, pages 6357–6368, 2021.
[92] Tan Li, Samuel Cheng, Tak Lam Chan, and Haibo Hu. A polynomial proxy model approach to verifiable decentralized federated learning. Scientific Reports, 14(1):28786, November 2024.
[93] Tian Li, Shengyuan Hu, Ahmad Beirami, and Virginia Smith. Ditto: Fair and robust federated learning through personalization. arXiv preprint arXiv:2012.04221, 2020.
[94] Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. Federated optimization in heterogeneous networks. arXiv preprint arXiv:1812.06127, 2018.
[95] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang. On the convergence of fedavg on non-iid data. arXiv preprint arXiv:1907.02189, 2019.
[96] X.-C. Li and D.-C. Zhan. Fedrs: Federated learning with restricted softmax for label distribution non-iid data. In ACM SIGKDD, pages 995–1005, 2021.
[97] Y. Li, F. Liang, L. Zhao, Y. Cui, W. Ouyang, J. Shao, F. Yu, and J. Yan. Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm. In ICLR, 2022.
[98] I-Cheng Lin, Osman Yagan, and Carlee Joe-Wong. Fedspd: A soft-clustering approach for personalized decentralized federated learning, 2024.
[99] T. Lin, L. Kong, S.U. Stich, and M. Jaggi. Ensemble distillation for robust model fusion in federated learning. In NeurIPS, pages 2351–2363, 2020.
[100] Renpu Liu, Cong Shen, and Jing Yang. Federated representation learning in the under-parameterized regime. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024.
[101] Xiangyang Liu, Tianqi Pang, and Chenyou Fan. Federated prompting and chain-of-thought reasoning for improving llms answering. In Knowledge Science, Engineering and Management: 16th International Conference, KSEM 2023, Proceedings, Part IV, pages 3–11. Springer-Verlag, 2023.
[102] Y. Liu, A. Huang, Y. Luo, H. Huang, Y. Liu, Y. Chen, L. Feng, T. Chen, H. Yu, and Q. Yang. Fedvision: An online visual object detection platform powered by federated learning. In AAAI, pages 13172–13179, 2020.
[103] Y. Liu, Y. Kang, C. Xing, T. Chen, and Q. Yang. A secure federated transfer learning framework. IEEE TS, 35(4):70–82, 2020.
[104] Y. Liu, Y. Kang, T. Zou, Y. Pu, Y. He, X. Ye, Y. Ouyang, Y.-Q. Zhang, and Q. Yang. Vertical federated learning. arXiv preprint arXiv:2211.12814, 2022.
[105] Yuan Liu, Shu Wang, Zhe Qu, Xingyu Li, Shichao Kan, and Jianxin Wang. Fedgca: Global consistent augmentation based single-source federated domain generalization. In 2024 IEEE International Conference on Multimedia and Expo (ICME), pages 1–6, 2024.
[106] Zichang Liu, Zhaozhuo Xu, Benjamin Coleman, and Anshumali Shrivastava. One-pass distribution sketch for measuring data heterogeneity in federated learning. In A. Oh, T. Naumann, A. Globerson, K.
Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 15660–15679. Curran Associates, Inc., 2023.
[107] L. Jiang and T. Lin. Test-time robust personalization for federated learning. In ICLR, 2023.
[108] L. Lyu, H. Yu, and Q. Yang. Threats to federated learning: A survey. arXiv preprint arXiv:2003.02133, 2020.
[109] L. Lyu, H. Yu, X. Ma, C. Chen, L. Sun, J. Zhao, Q. Yang, and S.Y. Philip. Privacy and robustness in federated learning: Attacks and defenses. IEEE TNNLS, 2022.
[110] L. Lyu, J. Yu, K. Nandakumar, Y. Li, X. Ma, J. Jin, H. Yu, and K.S. Ng. Towards fair and privacy-preserving federated deep models. IEEE Transactions on Parallel and Distributed Systems, 31(11):2524–2541, 2020.
[111] L. Lyu, X. Xu, Q. Wang, and H. Yu. Collaborative fairness in federated learning. 2020.
[112] L. Lyu, Y. Li, K. Nandakumar, J. Yu, and X. Ma. How to democratise and protect ai: Fair and differentially private decentralised deep learning. IEEE TDSC, 19(2):1003–1017, 2020.
[113] Luis M. Lopez-Ramos, Florian Leiser, Aditya Rastogi, Steven Hicks, Inga Strümke, Vince I. Madai, Tobias Budig, Ali Sunyaev, and Adam Hilbert. Interplay between federated learning and explainable artificial intelligence: a scoping review, 2025.
[114] L. Zhang, X. Lei, Y. Shi, H. Huang, and C. Chen. Federated learning with domain generalization. arXiv preprint arXiv:2111.10487, 2021.
[115] L. Zhang, Y. Luo, Y. Bai, B. Du, and L.-Y. Duan. Federated learning for non-iid data via unified feature learning and optimization objective alignment. In ICCV, pages 4420–4428, 2021.
[116] C. May and S.K. Sell. Intellectual Property Rights: A Critical History. Lynne Rienner Publishers, 2006.
[117] Connor Mclaughlin and Lili Su. Personalized federated learning via feature distribution adaptation. In NeurIPS, 2024.
[118] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B.A. y Arcas. Communication-efficient learning of deep networks from decentralized data. In AISTATS, pages 1273–1282, 2017.
[119] H. Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017.
[120] M. Davis and M. Maschler. The kernel of a cooperative game. Naval Research Logistics Quarterly, 12(3):223–259, 1965.
[121] M. Fang, X. Cao, J. Jia, and N.Z. Gong. Local model poisoning attacks to byzantine-robust federated learning. In USENIX, pages 1623–1640, 2020.
[122] J. Miao, Z. Yang, L. Fan, and Y. Yang. Fedseg: Class-heterogeneous federated learning for semantic segmentation. In CVPR, pages 8042–8052, 2023.
[123] M. Jiang, H. Yang, C. Cheng, and Q. Dou. Iop-fl: Inside-outside personalization for federated medical image segmentation. IEEE TMI, 2023.
[124] M. Jiang, Z. Wang, and Q. Dou. Harmofl: Harmonizing local and global drifts in federated learning on heterogeneous medical images. In AAAI, pages 1087–1095, 2022.
[125] M. Luo, F. Chen, D. Hu, Y. Zhang, J. Liang, and J. Feng. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. In NeurIPS, 2021.
[126] M. Mendieta, T. Yang, P. Wang, M. Lee, Z. Ding, and C. Chen. Local learning matters: Rethinking data heterogeneity in federated learning. In CVPR, pages 8397–8406, 2022.
[127] M. Mohri, G. Sivek, and A.T. Suresh. Agnostic federated learning. In ICML, pages 4615–4625, 2019.
[128] M. Simaan and J.B.J. Cruz. On the stackelberg strategy in nonzero-sum
games. JOTA, 11:533–555, 1973.
[129] M. Ye, X. Fang, B. Du, P.C. Yuen, and D. Tao. Heterogeneous federated learning: State-of-the-art and research challenges. CSUR, 2023.
[130] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011, 2011.
[131] D.C. Nguyen, Q.-V. Pham, P.N. Pathirana, M. Ding, A. Seneviratne, Z. Lin, O. Dobre, and W.-J. Hwang. Federated learning for smart healthcare: A survey. CSUR, pages 1–37, 2022.
[132] Q.P. Nguyen, B.K.H. Low, and P. Jaillet. Trade-off between payoff and model rewards in shapley-fair collaborative machine learning. In NeurIPS, pages 30542–30553, 2022.
[133] N. Liu, Z. Liang, J. Lin, and Y. Liu. Patient clustering improves efficiency of federated machine learning to predict mortality and hospital stay time using distributed electronic medical records. Journal of Biomedical Informatics, 99:103291, 2019.
[134] N. Shoham, T. Avidor, A. Keren, N. Israel, D. Benditkis, L. Mor-Yosef, and I. Zeitak. Overcoming forgetting in federated learning on non-iid data. In NeurIPS Workshop, 2019.
[135] International Consortium of Investigators for Fairness in Trial Data Sharing. Toward fairness in data sharing. New England Journal of Medicine, 375(5):405–407, 2016.
[136] M.S. Ozdayi, M. Kantarcioglu, and Y.R. Gel. Defending against backdoors in federated learning with robust learning rate. In AAAI, pages 9268–9276, 2021.
[137] S.L. Pardau. The california consumer privacy act: Towards a european-style privacy regime in the united states. J. Tech. L. & Pol'y, 23:68, 2018.
[138] S. Pati, U. Baid, B. Edwards, M. Sheller, S.-H. Wang, G.A. Reina, P. Foley, A. Gruzdev, D. Karkada, C. Davatzikos, et al. Federated learning enables big data for rare cancer boundary detection. Nature Communications, 13(1):7346, 2022.
[139] B. Peng, M. Chi, and C. Liu. Non-iid federated learning via random exchange of local feature maps for textile iiot secure computing. SCIS, 65(7):170302, 2022.
[140] P. Kairouz, H.B. McMahan, B. Avent, A. Bellet, M. Bennis, A.N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al. Advances and open problems in federated learning. Foundations and Trends in Machine Learning, pages 1–210, 2021.
[141] P. Tian, Z. Chen, W. Yu, and W. Liao. Towards asynchronous federated learning based threat detection: A dc-adam approach. Computers & Security, 108:102344, 2021.
[142] Q. Li, B. He, and D. Song. Model-contrastive federated learning. In CVPR, pages 10713–10722, 2021.
[143] Q. Li, B. He, and D. Song. Adversarial collaborative learning on non-iid features. In ICML, 2023.
[144] Q. Li et al. A survey on federated learning systems: vision, hype and reality for data privacy and protection. IEEE TKDE, 2021.
[145] Q. Li, Y. Diao, Q. Chen, and B. He. Federated learning on non-iid data silos: An experimental study. IEEE TKDE, 2022.
[146] Q. Liu, C. Chen, J. Qin, Q. Dou, and P.-A. Heng. Feddg: Federated domain generalization on medical image segmentation via episodic learning in continuous frequency space. In CVPR, pages 1013–1023, 2021.
[147] Z. Qu, K. Lin, J. Kalagnanam, Z. Li, J. Zhou, and Z. Zhou. Federated learning's blessing: Fedavg has linear speedup. In ICLR, 2021.
[148] Q. Xia, Z. Tao, Z. Hao, and Q. Li. Faba: an algorithm for fast aggregation against byzantine attacks in distributed neural networks. In IJCAI
, 2019. [149] A. Radford, J.W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, and J. Clark et al. Learning transferable visual models from natural language supervision. In ICML , pages 8748–8763, 2021. [150] T.H. Rafi, F.A. Noor, T.Hussain, and D.-K. Chae. Fairness and privacy-preserving in federated learning: A survey. arXiv preprint arXiv:2306.08402 , 2023. [151] Alain Rakotomamonjy, Kimia Nadjahi, and Liva Ralaivola. Federated wasserstein distance. In The Twelfth International Conference on Learning Representations , 2024. [152] S. Reddi, Z. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Kone ˇcný, S. Kumar, , and H. B. McMahan. Adaptive federated optimization. In ICLR , 2021. [153] R.Guerraoui, S.Rouault, et al. The hidden vulnerability of distributed learning in byzantium. In ICML , pages 3521–3530, 2018. [154] N. Rodríguez-Barroso, D. Jiménez-López, M. V . Luzón, F. Herrera, and E. Martínez-Cámara. Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges. Information Fusion , 90:148–173, 2023. 16 [155] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. Imagenet large scale visual recognition challenge. IJCV , pages 211–252, 2015. [156] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision , 115:211–252, 2015. [157] R.Zhang, Q.Xu, J.Yao, Y .Zhang, Q.Tian, and Y .Wang. Federated domain generalization with generaliza- tion adjustment. In CVPR , pages 3954–3963, 2023. [158] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. 
In Proceedings of the 11th European Conference on Computer Vision: Part IV, ECCV'10, pages 213–226, Berlin, Heidelberg, 2010. Springer-Verlag.
[159] S. Saha and T. Ahmad. Federated transfer learning: Concept and applications. IA, 15(1):35–44, 2021.
[160] J. Shao, Z. Li, W. Sun, T. Zhou, Y. Sun, L. Liu, Z. Lin, Y. Mao, and J. Zhang. A survey of what to share in federated learning: Perspectives on model utility, privacy leakage, and communication efficiency, 2024.
[161] L. S. Shapley. A Value for n-Person Games, volume 69. 1997.
[162] V. Shejwalkar and A. Houmansadr. Manipulating the Byzantine: Optimizing model poisoning attacks and defenses for federated learning. In NDSS, 2021.
[163] J. Shin, Y. Li, Y. Liu, and S.-J. Lee. FedBalancer: Data and pace control for efficient federated learning on heterogeneous clients. In MobiSys '22, pages 436–449, New York, NY, USA, 2022. Association for Computing Machinery.
[164] M. Soltany, F. Pourpanah, M. Molahasani, M. Greenspan, and A. Etemad. Federated domain generalization with label smoothing and balanced decentralized training. In ICASSP, pages 1–5, 2025.
[165] J. Song, L. Wang, H. Liu, et al. FuzzFL: A fuzzy logic-based federated learning framework. IEEE Transactions on Fuzzy Systems, 29(5):1000–1012, 2021.
[166] J. Sun, T. Chen, G. B. Giannakis,
Q. Yang, and Z. Yang. Lazily aggregated quantized gradient innovation for communication-efficient federated learning. IEEE PAMI, 44(4):2031–2044, 2020.
[167] Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan. Can you really backdoor federated learning? In NeurIPS, 2019.
[168] Y. Tan, G. Long, L. Liu, T. Zhou, Q. Lu, J. Jiang, and C. Zhang. FedProto: Federated prototype learning across heterogeneous clients. In AAAI, 2022.
[169] A. Tariq, M. A. Serhani, F. M. Sallabi, E. S. Barka, T. Qayyum, H. M. Khater, and K. A. Shuaib. Trustworthy federated learning: A comprehensive review, architecture, key challenges, and future research prospects. IEEE Open Journal of the Communications Society, 5:4920–4998, 2024.
[170] T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg. BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 7:47230–47244, 2019.
[171] D. Thakur, A. Guzzo, G. Fortino, and S. K. Das. Non-convex optimization in federated learning via variance reduction and adaptive learning, 2024.
[172] T. H. T. Le, N. H. Tran, Y. K. Tun, M. N. H. Nguyen, S. R. Pandey, Z. Han, and C. S. Hong. An incentive mechanism for federated learning in wireless cellular networks: An auction approach. Trans. Wireless. Comm., 20(8):4874–4887, August 2021.
[173] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith. Federated optimization in heterogeneous networks. In MLSys, 2020.
[174] T. Li, M. Sanjabi, A. Beirami, and V. Smith. Fair resource allocation in federated learning. In ICLR, 2020.
[175] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith. Federated learning: Challenges, methods, and future directions. IEEE SPM, pages 50–60, 2020.
[176] T. Song, Y. Tong, and S. Wei. Profit allocation for federated learning. In IEEE Big Data, pages 2577–2586, 2019.
[177] T. Yoon, S. Shin, S. J. Hwang, and E. Yang. FedMix: Approximation of mixup under mean augmented federated learning.
In ICLR, 2021.
[178] T. Zhou and E. Konukoglu. FedFA: Federated feature augmentation. In ICLR, 2023.
[179] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In NeurIPS, 2017.
[180] V. Kulkarni, M. Kulkarni, and A. Pant. Survey of personalization techniques for federated learning. In WorldS4, pages 794–797, 2020.
[181] V. Mothukuri, R. M. Parizi, S. Pouriyeh, Y. Huang, A. Dehghantanha, and G. Srivastava. A survey on security and privacy of federated learning. Future Generation Computer Systems, pages 619–640, 2021.
[182] P. Voigt and A. Von dem Bussche. The EU General Data Protection Regulation (GDPR). Springer International Publishing, 1st edition, 2017.
[183] O. A. Wahab, A. Mourad, H. Otrok, and T. Taleb. Federated machine learning: Survey, multi-level classification, desirable criteria and future directions in communication and networking systems. IEEE CST, pages 1342–1397, 2021.
[184] J. Wang, Q. Liu, H. Liang, G. Joshi, and H. V. Poor. Tackling the objective inconsistency problem in heterogeneous federated optimization. In NeurIPS, pages 7611–7623, 2020.
[185] Z. Wang, Y. Zhu, D. Wang, and Z. Han. FedACS: Federated skewness analytics in heterogeneous decentralized data environments. In IWQOS, pages 1–10, 2021.
[186] K. Wei, J. Li, C. Ma, M. Ding, S. Wei, F. Wu, G. Chen, and T. Ranbaduge. Vertical federated learning:
Challenges, methodologies and experiments. arXiv preprint arXiv:2202.04309, 2022.
[187] H. Woisetschläger, A. Erben, S. Wang, R. Mayer, and H.-A. Jacobsen. Federated fine-tuning of LLMs on the very edge: The good, the bad, the ugly. In Proceedings of the Eighth Workshop on Data Management for End-to-End Machine Learning, SIGMOD/PODS '24, pages 39–50. ACM, June 2024.
[188] D. Wu, J. Bai, Y. Song, J. Chen, W. Zhou, Y. Xiang, and A. Sajjanhar. FedInverse: Evaluating privacy leakage in federated learning. In ICLR, 2024.
[189] G. Wu and S. Gong. Decentralised learning from independent multi-domain labels for person re-identification. In AAAI, pages 2898–2906, 2021.
[190] W. Zheng, L. Yan, C. Gou, and F.-Y. Wang. Federated meta-learning for fraudulent credit card detection. In IJCAI, pages 4654–4660, 2021.
[191] X. Cao, J. Jia, and N. Z. Gong. Provably secure federated learning against malicious clients. In AAAI, pages 6885–6893, 2021.
[192] X. Cao, M. Fang, J. Liu, and N. Z. Gong. FLTrust: Byzantine-robust federated learning via trust bootstrapping. In NDSS, 2021.
[193] X. Chen, C. Liu, B. Li, K. Lu, and D. Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017.
[194] T. Xia, A. Ghosh, X. Qiu, and C. Mascolo. FLea: Addressing data scarcity and label skew in federated learning via privacy-preserving feature augmentation. In KDD '24, pages 3484–3494, New York, NY, USA, 2024. Association for Computing Machinery.
[195] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
[196] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He. Aggregated residual transformations for deep neural networks.
In CVPR, pages 1492–1500, 2017.
[197] X. Li, M. Jiang, X. Zhang, M. Kamp, and Q. Dou. FedBN: Federated learning on non-IID features via local batch normalization. In ICLR, 2021.
[198] X. Liu et al. UniFed: A benchmark for federated learning frameworks. arXiv preprint arXiv:2207.10308, 2022.
[199] X. Liu, W. Xi, W. Li, D. Xu, G. Bai, and J. Zhao. Co-MDA: Federated multi-source domain adaptation on black-box models. IEEE TCSVT, 2023.
[200] X. Lyu, Y. Han, W. Wang, J. Liu, B. Wang, J. Liu, and X. Zhang. Poisoning with Cerberus: Stealthy and colluded backdoor attack against federated learning. In AAAI, 2023.
[201] X. Ma, J. Zhu, Z. Lin, S. Chen, and Y. Qin. A state-of-the-art survey on solving non-IID data in federated learning. Future Generation Computer Systems, 135:244–258, 2022.
[202] X. Mu, Y. Shen, K. Cheng, X. Geng, J. Fu, T. Zhang, and Z. Zhang. FedProc: Prototypical contrastive federated learning on non-IID data. arXiv preprint arXiv:2109.12273, 2021.
[203] X. Pan, P. Luo, J. Shi, and X. Tang. Two at once: Enhancing learning and generalization capacities via IBN-Net. In ECCV, pages 464–479, 2018.
[204] X. Peng, Z. Huang, Y. Zhu, and K. Saenko. Federated adversarial domain adaptation. In ICLR, 2020.
[205] Y. Xu, W. Wang, Y. Zhang, et al. FedAD: Federated learning via attention distillation. arXiv preprint arXiv:2203.09553, 2022.
[206] X. Xu, H. Li, Z. Li, and X. Zhou. SAFE: Synergic data filtering for federated learning in cloud-edge computing. IEEE TII, 19(2):1655–1665, 2022.
[207] X. Xu, L. Lyu, X. Ma, C. Miao, C. S. Foo, and B. K. H. Low. Gradient
driven rewards to guarantee fairness in collaborative machine learning. In NeurIPS, volume 34, pages 16104–16117, 2021.
[208] X. Zhang, F. Li, Z. Zhang, Q. Li, C. Wang, and J. Wu. Enabling execution assurance of federated learning at untrusted participants. In INFOCOM, pages 1877–1886. IEEE, 2020.
[209] H. Yang, J. Yuan, C. Li, G. Zhao, Z. Sun, Q. Yao, B. Bao, A. V. Vasilakos, and J. Zhang. BrainIoT: Brain-like productive services provisioning with federated learning in industrial IoT. IEEE IoT-J, 9(3):2014–2024, 2021.
[210] Q. Yang, Y. Liu, T. Chen, and Y. Tong. Federated machine learning: Concept and applications. ACM TIST, pages 1–19, 2019.
[211] Q. Yang, X. Du, X. Wu, W. Wang, A. Liu, and S. Wang. FedRTID: An efficient shuffle federated learning via random participation and adaptive time constraint. Cybersecurity, 7(1):76, 2024.
[212] Y. Dandi, L. Barba, and M. Jaggi. Implicit gradient alignment in distributed and federated learning. In AAAI, pages 6454–6462, 2022.
[213] G. Yenduri, M. Ramalingam, G. C. Selvi, Y. Supriya, G. Srivastava, P. K. R. Maddikunta, G. D. Raj, R. H. Jhaveri, B. Prabadevi, W. Wang, A. V. Vasilakos, and T. R. Gadekallu. GPT (generative pre-trained transformer): A comprehensive review on enabling technologies, potential applications, emerging challenges, and future directions. IEEE Access, 12:54608–54649, 2024.
[214] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, pages 2278–2324, 1998.
[215] Y. Li, X. Wang, R. Zeng, P. K. Donta, I. Murturi, M. Huang, and S. Dustdar. Federated domain generalization: A survey. arXiv preprint arXiv:2306.01334, 2023.
[216] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NeurIPS Workshop, 2011.
[217] J. H. Yoo, H. Jeong, J. Lee, and T.-M. Chung.
Federated learning: Issues in medical application. In FDSE, pages 3–22, 2021.
[218] Y. Ovadia, E. Fertig, J. Ren, Z. Nado, D. Sculley, S. Nowozin, J. Dillon, B. Lakshminarayanan, and J. Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. In NeurIPS, volume 32, 2019.
[219] Y. Shi, H. Yu, and C. Leung. Towards fairness-aware federated learning. IEEE TNNLS, 2023.
[220] Y. Shi, J. Liang, W. Zhang, V. Y. Tan, and S. Bai. Towards understanding and mitigating dimensional collapse in heterogeneous federated learning. In ICLR, 2023.
[221] Y. Tan, Y. Liu, G. Long, J. Jiang, Q. Lu, and C. Zhang. Federated learning on non-IID graphs via structural knowledge sharing. In AAAI, 2023.
[222] F. Yu, W. Zhang, Z. Qin, Z. Xu, D. Wang, C. Liu, Z. Tian, and X. Chen. Fed2: Feature-aligned federated learning. In ACM SIGKDD, pages 2066–2074, 2021.
[223] Y. Wei, L. Yang, Y. Han, and Q. Hu. Multi-source collaborative contrastive learning for decentralized domain adaptation. IEEE TCSVT, 2022.
[224] Y. Wei and Y. Han. Exploring instance relation for decentralized multi-source domain adaptation. In ICASSP, pages 1–5, 2023.
[225] Y. Wu, Y. Kang, J. Luo, Y. He, and Q. Yang. FedCG: Leverage conditional GAN for protecting privacy and maintaining competitive performance in federated learning. In IJCAI, pages 2334–2340, 2022.
[226] Y. Zhao. Federated learning with non-IID data. arXiv preprint
arXiv:1806.00582, 2018.
[227] E. L. Zec, J. Martinsson, O. Mogren, L. R. Sütfeld, and D. Gillblad. Specialized federated learning using mixture of experts. arXiv preprint arXiv:2010.02056, 2020.
[228] L.-L. Zeng, Z. Fan, J. Su, M. Gan, L. Peng, H. Shen, and D. Hu. Gradient matching federated domain adaptation for brain image classification. IEEE TNNLS, 2022.
[229] J. Zhang, Z. Li, B. Li, J. Xu, S. Wu, S. Ding, and C. Wu. Federated learning with label distribution skew via logits calibration. In ICML, pages 26311–26329, 2022.
[230] J. Zhang, S. Zeng, M. Zhang, R. Wang, F. Wang, Y. Zhou, P. P. Liang, and L. Qu. FLHetBench: Benchmarking device and state heterogeneity in federated learning. In CVPR, pages 12098–12108, 2024.
[231] K. Zhang, Y. Wang, H. Wang, L. Huang, C. Yang, X. Chen, and L. Sun. Efficient federated learning on knowledge graphs via privacy-preserving relation embedding aggregation. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 613–621, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
[232] R. Zhang, X. Hu, B. Li, S. Huang, H. Deng, Y. Qiao, P. Gao, and H. Li. Prompt, generate, then cache: Cascade of foundation models makes strong few-shot learners. In CVPR, pages 15211–15222, 2023.
[233] Z. Zhou, S. S. Azam, C. Brinton, and D. I. Inouye. Efficient federated domain translation. In ICLR, 2023.
[234] Z. Hu, K. Shaloudegi, G. Zhang, and Y. Yu. FedMGDA+: Federated learning meets multi-objective optimization. IEEE TNSE, 2020.
[235] M. H. Zhu, L. N. Ezzine, D. Liu, and Y. Bengio. FedILC: Weighted geometric mean and invariant gradient covariance for federated learning on non-IID data. arXiv preprint arXiv:2205.09305, 2022.
[236] W. Zhuang, Y. Wen, X. Zhang, X. Gan, D. Yin, D.
Zhou, S. Zhang, and S. Yi. Performance optimization of federated person re-identification via benchmark analysis. In ACM MM, pages 955–963, 2020.
[237] Z. Niu, H. Wang, H. Sun, S. Ouyang, Y. W. Chen, and L. Lin. MCKD: Mutually collaborative knowledge distillation for federated domain adaptation and generalization. In ICASSP, pages 1–5, 2023.
[238] Z. Qu, X. Li, R. Duan, Y. Liu, B. Tang, and Z. Lu. Generalized federated learning via sharpness aware minimization. In ICML, 2022.
[239] Z. Wang, X. Fan, J. Qi, C. Wen, C. Wang, and R. Yu. Federated learning with fair averaging. In IJCAI, 2021.
[240] Z. Zhang, Q. Su, and X. Sun. Dim-Krum: Backdoor-resistant federated learning for NLP with dimension-wise Krum-based aggregation. In EMNLP, 2022.
[241] Z. Zhu, J. Hong, and J. Zhou. Data-free knowledge distillation for heterogeneous federated learning. In ICML, pages 12878–12889, 2021.

Appendix to ATR-Bench: A Federated Learning Benchmark for Adaptation, Trust, and Reasoning

Sec. A discusses the background thoroughly.
Sec. B lists the hyperparameters.
Sec. C discusses prior work extensively.
Sec. D depicts the outline of ATR-Bench.
Sec. E provides additional details on benchmark metrics.
Sec. F provides additional details on the experimental setup.
Sec. G provides additional details on future directions.

A Background

A.1 History and Terminology

Federated learning enables multiple parties to jointly train a shared model without exchanging their raw data, preserving privacy and reducing communication overhead.
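To make this collaborative training concrete, here is a minimal, self-contained sketch of one round of weighted model averaging in the style of FedAvg; the clients, their data, the one-parameter linear model, and the learning rate are all hypothetical illustrations, not the benchmark's actual training code:

```python
# Sketch of federated averaging on a toy 1-D linear model y = w * x.
# Each client runs local SGD on its private data; the server then averages
# the returned parameters weighted by local dataset size. No raw data moves.

def local_update(global_weights, data, lr=0.05):
    """One toy 'epoch' of local SGD on squared error for the model y = w * x."""
    w = global_weights
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
        w -= lr * grad
    return w

def fedavg_round(global_w, client_datasets):
    """Broadcast -> local update -> size-weighted aggregation (alpha_i = N_i / sum_j N_j)."""
    sizes = [len(d) for d in client_datasets]
    total = sum(sizes)
    local_models = [local_update(global_w, d) for d in client_datasets]
    return sum(n / total * wi for n, wi in zip(sizes, local_models))

clients = [
    [(1.0, 2.0), (2.0, 4.0)],              # client 0: consistent with w = 2.0
    [(1.0, 2.1), (2.0, 4.2), (1.0, 2.1)],  # client 1: consistent with w = 2.1
]
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
# w settles between the two clients' optima, pulled toward the larger client.
```

The weighting mirrors the common choice of mixing coefficients proportional to local dataset sizes.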
Early formulations include client-server optimization schemes and federated averaging algorithms [81, 82, 118]. Depending on how data are partitioned across participants, FL methods are typically divided into three paradigms [66, 154]:
• Horizontal Federated Learning (HFL): All clients hold data with the same feature space but on different samples. They collaboratively update a global model by sharing parameter updates while keeping each local dataset private [118, 122].
• Vertical Federated Learning (VFL): Participants possess complementary features for the same set of entities. Secure protocols are used to jointly compute model updates on aligned samples without revealing individual feature values [104, 186].
• Federated Transfer Learning (FTL): When both feature spaces and sample sets differ across clients, FTL applies transfer learning techniques, such as knowledge distillation or representation mapping, to enable knowledge sharing between heterogeneous domains [103, 159].
In this work, we concentrate on four fundamental properties of horizontal federated learning (HFL)² and present a unified evaluation framework under the HFL setting:
Generalization (GFL). Due to the non-IID nature of client data, federated models must contend with both cross-client distribution shifts, where local empirical risks diverge across participants, and out-of-client distribution shifts, which capture discrepancies between seen and unseen client populations [95, 204, 146]. These phenomena hinder both convergence speed and test-time performance when models are deployed on new or held-out clients.
Robustness (RFL). Federated learning's decentralized paradigm exposes it to adversarial manipulation. On one hand, Byzantine attacks corrupt either local training data or uploaded updates to derail global aggregation [88, 26].
On the other, backdoor attacks stealthily inject triggers into client updates so that the global model behaves normally on benign inputs but misclassifies targeted samples [167, 36].
Fairness (FFL). Equitable participation and performance are critical to sustain federated collaborations. Collaborative fairness addresses how to reward clients proportionally to their computational effort and data value [176, 132], while performance fairness ensures that the global model does not systematically underperform on underrepresented or marginalized client distributions [127, 18].
Reasoning (RFL). As federated models are increasingly applied to complex, multimodal tasks, such as edge-device vision or medical diagnostics, they must support structured, multi-step reasoning across distributed and heterogeneous data sources. Constraints on bandwidth and privacy make it challenging to share full context, so federated systems require mechanisms for distributed reasoning and privacy-preserving fine-tuning that maintain coherent inference chains under non-IID conditions [91].
By benchmarking these four axes (generalization, robustness, fairness, and reasoning) under a common HFL protocol, we aim to provide a comprehensive assessment of federated methods and elucidate their trade-offs for real-world, privacy-sensitive deployments.
² We use "HFL" to denote horizontal federated learning.

A.2 Problem Formulation

We consider a horizontal federated learning setting with $M$ clients, indexed by $i = 1, \dots, M$, each holding a private dataset $\mathcal{D}_i$ of size $N_i = |\mathcal{D}_i|$. Each example $(x, y) \in \mathcal{D}_i$ is drawn from a client-specific distribution $P_i(x, y)$. Our goal is to train a shared model $w = f \circ g$, where $f: \mathcal{X} \to \mathbb{R}^d$ is a feature extractor mapping inputs $x$ to $d$-dimensional embeddings $h = f(x)$, and $g: \mathbb{R}^d \to \mathbb{R}^{|\mathcal{C}|}$ is a classifier producing logits $z = g(h)$ over the label set $\mathcal{C}$. Federated learning seeks the global parameter $w^*$ that minimizes a weighted combination
of local empirical risks:
$$w^* = \arg\min_{w} \sum_{i=1}^{M} \alpha_i \mathcal{L}_i(w; \mathcal{D}_i), \qquad (1)$$
where $\mathcal{L}_i(w; \mathcal{D}_i) = \frac{1}{N_i} \sum_{(x,y) \in \mathcal{D}_i} \ell\big(g(f(x)), y\big)$ is the average loss on client $i$, and the mixing weights satisfy $\sum_i \alpha_i = 1$ (commonly $\alpha_i = N_i / \sum_j N_j$ or $\alpha_i = 1/M$). Training proceeds in communication rounds, each consisting of three phases:
1. Broadcast: $w_i^{(t)} = w^{(t-1)} \;\; \forall i$,
2. Local Update: $w_i^{(t)} \leftarrow \arg\min_{w_i} \mathbb{E}_{(x,y) \sim \mathcal{D}_i}\, \ell\big(g(f(x; w_i)), y\big)$,
3. Aggregation: $w^{(t)} = \sum_{i=1}^{M} \alpha_i w_i^{(t)}$.  (2)
Here, step 1 distributes the current global model to all clients; step 2 performs one or more epochs of local optimization (e.g., via SGD) on each $\mathcal{D}_i$; and step 3 fuses client updates into the new global model. This iterative protocol continues until convergence or a stopping criterion is met [118, 173].

A.2.1 Data Heterogeneity in Federated Learning

In real-world federated setups, each client's dataset $\mathcal{D}_i$ is drawn from its own distribution $P_i(x, y)$, leading to non-IID data across the network [173, 146, 147]. We often decompose $P_i(x, y) = P_i(y)\, P_i(x \mid y)$ and distinguish two principal forms of heterogeneity:
• Label shift: Clients differ in their label marginals but share the same class-conditional features: $P_i(y) \neq P_j(y)$, $P_i(x \mid y) = P_j(x \mid y)$. A common simulation uses Dirichlet sampling [83] to skew $P_i(y)$.
• Feature shift: All clients have the same label distribution but observe different feature patterns for each class: $P_i(y) = P_j(y)$, $P_i(x \mid y) \neq P_j(x \mid y)$. This arises, for example, when imaging devices vary across hospitals [197].
Beyond these in-network shifts, out-of-client shift refers to the performance degradation when deploying the federated model on entirely new data sources $P_o(x, y) \neq P_i(x, y)$, despite matching label marginals: $P_o(y) = P_i(y)$, $P_o(x \mid y) \neq P_i(x \mid y)$. Such unseen domain shifts underscore the need for federated methods that generalize beyond the participating clients [65].

A.2.2 Adversarial Threats in Federated Learning

In federated settings, untrusted participants may launch attacks that compromise model integrity. We categorize these into two broad classes:
1. Byzantine (Untargeted) Attacks. Here, adversaries aim simply to degrade overall model accuracy without a specific target outcome [8, 153, 26]. Two common strategies are:
• Data Poisoning: Malicious clients corrupt their local training data before participating. For example, in symmetric label noise (SymFlip), each label is flipped to any other class with equal probability $\epsilon / (|\mathcal{C}| - 1)$:
$$T_{\mathrm{sym}}(i, j) = \begin{cases} 1 - \epsilon & i = j, \\ \frac{\epsilon}{|\mathcal{C}| - 1} & i \neq j, \end{cases}$$
while in pair-flip noise (PairFlip) labels are only swapped among semantically similar classes [9, 4].
• Model Poisoning: Rather than tampering with data, adversaries directly alter their client updates. Examples include:
– Random-Noise: Substituting the true gradient $\nabla_k$ with random values (e.g., Gaussian noise).
– Lie Attack: Crafting updates just beyond detection thresholds by adding a small multiple of the benign update standard deviation [44].
– Optimization-Aware Poisoning: Solving a max-loss subproblem to push the global model away from its benign update trajectory [121].
– MinMax/MinSum Attacks: Adjusting the poisoned update so that its maximum (or sum of) distance to benign updates remains within the natural benign update spread [162].
2. Backdoor (Targeted) Attacks. Here, the attacker embeds a hidden trigger so that when specific patterns are present, the global model misclassifies inputs into a chosen target label, while preserving normal performance otherwise [193, 17]. Concretely, poisoned clients mix a trigger mask $m$ and pattern $\Phi$ into a fraction of their examples:
$$\tilde{x} = (1 - m) \odot x + m \odot \Phi,$$
and optimize a combined loss:
$$\mathbb{E}_{(x,y) \sim \mathcal{D}_i}\big[\mathcal{L}(w_i, x, y)\big] + \lambda\, \mathbb{E}_{(\tilde{x}, y_t)}\big[\mathcal{L}(w_i, \tilde{x}, y_t)\big],$$
where $y_t$ is the attacker-specified target class and $\lambda \geq 0$ balances backdoor potency against clean-data fidelity. Recent work has shown that distributing trigger fragments across multiple malicious clients can evade standard defenses [21, 200].

A.2.3 Client Incentives and Fairness

Federated learning relies on the voluntary participation of clients with heterogeneous data and compute resources. To maintain long-term engagement and equitable outcomes, two primary fairness concerns must be addressed:
Reward Allocation (Reward Conflict). Clients incur varying costs (e.g., data labeling, computation) and contribute unequally to the global model's performance [208, 219]. A fair compensation scheme should grant higher rewards to those whose participation yields larger marginal gains. We adopt the Shapley Value from cooperative game theory [161, 5, 120] to quantify each client's contribution:
$$\nu_i = \frac{\rho}{M} \sum_{S \subseteq \{1, \dots, M\} \setminus \{i\}} \frac{A\big(w_{S \cup \{i\}}, u\big) - A\big(w_S, u\big)}{\binom{M-1}{|S|}},$$
where $A(w_S, u)$ is the model accuracy on test set $u$ when trained on the clients in $S$, and $\rho > 0$ scales the values.
Prediction Consistency (Prediction Biases). Data heterogeneity can cause the global model to perform well on some client domains but poorly on others, leading to prediction bias [127, 174]. We measure this by the standard deviation of per-domain accuracies:
$$\zeta = \mathrm{StdDev}\big(\{A(w, u)\}_{u \in U}\big),$$
where $U$ is the set of evaluation domains. Lower $\zeta$ indicates more uniform performance, while higher $\zeta$ signals greater disparity among client groups.

B Hyperparameters

Table 9: Selected hyper-parameters for the various evaluated methods. Note that similar symbols may represent different concepts across different approaches. Detailed explanations are provided in § F.2.

Method | Hyper-Parameters
General FL Methods (Generalizable Federated Learning)
(§ 2)
FedProx [173] | Proximal term µ = 0.01
SCAFFOLD [78] | Server-side learning rate lr = 0.25
FedProc [202] | Contrastive temperature τ = 1.0
MOON [142] | τ = 0.5 (temperature), µ = 1.0 (proximal)
FedRS [96] | Scaling factor α = 0.5
FedDyn [202] | Regularization strength α = 0.5
FedOpt [152] | Global optimizer LR ηg = 0.5
FedProto [168] | Prototype regularizer λ = 2
FedLC [229] | Scaling factor τ = 0.5
FedDC [87] | Penalty weight α = 0.1
FedNTD [45] | Temperature τ = 1, regularization weight β = 1
FPL [61] | Contrastive temperature τ = 0.02
KD3A [55] | Confidence gate g ∈ [0.9, 0.95]
Robust FL Methods (Robust Federated Learning) (§ 3)
Multi-Krum [8] | Byzantine tolerance Υ < 50%, Top-K: 5
Bulyan [153] | Byzantine tolerance Υ < 50%
Trimmed Mean [35] | Evil client ratio Υ < 50%
FoolsGold [42] | Stability threshold ϵ = 10⁻⁵
DnC [162] | Sub-dimension b = 1000, filter ratio c = 1.0
FLTrust [192] | Public epochs E = 20
SageFlow [70] | Threshold Eth = 2.2, exponent δ = 5
RFA [85] | Iterations E = 3
RLR [136] | LR lr = 1.0, threshold τ = 4.0
CRFL [22] | Norm threshold ρ = 15, smoothing σ = 0.01
Fairness-Oriented FL Methods (Fair Federated Learning) (§ 3)
AFL [127] | Regularization coefficient γ = 0.01

C Related Work

Federated learning (FL) has spawned numerous survey papers in recent years. Early overviews [210, 175, 183, 144, 140, 154] lay out the high-level principles and system challenges, but typically do not delve into detailed algorithmic solutions for specific FL problems. A large body of work addresses distributional heterogeneity in FL. Several surveys [226, 66, 145, 129, 215] categorize approaches for label skew, feature skew, and concept drift between clients, and compare client-level strategies such as local
regularization [173], personalized layers [100], and meta-learning [38]. Domain adaptation in FL, where some target-domain data are available during training, is surveyed in [204, 65], highlighting adversarial alignment [46] and feature disentanglement [50]. Out-of-distribution generalization methods, which aim to perform well on unseen client distributions, are comparatively less reviewed but include invariant optimization [146] and robust aggregation schemes [34]. FL's distributed nature makes it vulnerable to Byzantine and backdoor attacks. Surveys on adversarial threats [108, 72, 71] classify untargeted data and model poisoning (e.g., [8, 153]) and targeted backdoors [167, 36]. Defense surveys [181] compare robust aggregation, anomaly detection, and certified defenses [191, 22]. Fairness in FL encompasses both equitable performance across client groups and fair reward allocation. Recent reviews [150, 219] discuss methods that enforce uniform accuracy via min–max optimization [127, 174] or multi-objective updates [234]. Client-level incentive mechanisms based on reputations [111] and data valuation via Shapley approximations [207, 67] are surveyed in [144]. As FL moves into high-stakes domains, model transparency and reasoning become critical. While most surveys focus on performance, a few emerging works [101] explore integrating chain-of-thought explanations into FL, and others [165] survey symbolic and knowledge-graph-based federated models. However, no comprehensive survey brings together domain adaptation, generalization, robustness, fairness, and reasoning under a unified evaluation framework. To fill these gaps, we present the first holistic survey and benchmark that jointly examines domain adaptation, OOD generalization, adversarial robustness, fairness, and reasoning in FL.
We systematically categorize state-of-the-art methods in each dimension and provide a unified empirical comparison across common benchmarks, offering both breadth and depth for researchers and practitioners.

D Outline

The ATR-Bench framework is organized around five key components, each addressing a foundational aspect of federated learning. Adaptation focuses on techniques that enhance generalization across diverse clients, including regularization, augmentation, and cross-domain calibration. Trust centers on robustness in adversarial and unreliable environments, covering Byzantine resilience and fairness across both collaboration and performance. Reasoning introduces an emerging line of research into interpretability and symbolic-neural integration, highlighting methods like distillation with interpretability and knowledge-driven aggregation. The Benchmark module formalizes these dimensions through standardized evaluations, enabling consistent comparisons across methods and datasets. Finally, Outlook offers reflective insights, summarizing experimental findings and outlining future research opportunities. Together, these pillars form a structured foundation for evaluating, comparing, and advancing federated learning in real-world settings.

E Benchmark Metrics

E.1 Generalization Metrics

We evaluate a federated model's ability to handle distribution shifts in two scenarios: cross-client and out-of-distribution.
Cross-Client Accuracy. Under cross-client heterogeneity, each client's test set $u$ may follow a different distribution. We measure the standard Top-1 accuracy on each $u$ as
$$A_u = \frac{1}{|u|} \sum_{(x,y) \in u} \mathbb{1}\{\arg\max w(x) = y\},$$
and report the mean over a collection of held-out client sets $U$ via
$$A_U = \frac{1}{|U|} \sum_{u \in U} A_u.$$
Results across held-out clients under various distribution shifts are summarized in Table 10.
Out-of-Distribution Accuracy. To assess performance on entirely unseen domains, we compute Top-1 accuracy on a designated OOD test set $O$:
$$A_O = \frac{1}{|O|} \sum_{(x,y) \in O} \mathbb{1}\{\arg\max w(x) = y\}.$$
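These accuracy metrics can be sketched in a few lines; the thresholded two-class "model" and the client test sets below are made up purely for illustration:

```python
# Sketch of the generalization metrics: per-client Top-1 accuracy A_u and
# its mean A_U over held-out client test sets (the same routine applied to
# an OOD test set gives A_O).

def top1_accuracy(model, test_set):
    """A_u: fraction of samples whose arg-max logit matches the label."""
    correct = 0
    for x, y in test_set:
        logits = model(x)
        pred = max(range(len(logits)), key=lambda c: logits[c])  # arg-max
        if pred == y:
            correct += 1
    return correct / len(test_set)

def mean_cross_client_accuracy(model, client_test_sets):
    """A_U: average of A_u over a collection of held-out client sets."""
    accs = [top1_accuracy(model, u) for u in client_test_sets]
    return sum(accs) / len(accs)

# Toy 'model': 2-class logits from a threshold on a scalar feature.
model = lambda x: [1.0, 0.0] if x < 0.5 else [0.0, 1.0]

client_test_sets = [
    [(0.1, 0), (0.9, 1)],            # both correct  -> A_u = 1.0
    [(0.2, 0), (0.8, 1), (0.3, 1)],  # 2 of 3 correct -> A_u = 2/3
]
```

With these toy sets, $A_U = (1.0 + 2/3)/2 = 5/6$.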
E.2 Robustness Metrics

In federated learning, adversarial participants can undermine the shared model through untargeted (Byzantine) or targeted (backdoor) manipulations. We quantify defense effectiveness with two key metrics:
Accuracy Degradation (I). For Byzantine resilience, we compare the model's clean accuracy $A_{\mathrm{clean}}$ on domain $u$ against its accuracy under attack $A_{\mathrm{byz}}$. The degradation
$$I = A_{\mathrm{clean}} - A_{\mathrm{byz}}$$
measures how much performance is lost due to malicious updates.
Backdoor Success Rate (R). To assess backdoor defenses, we inject a trigger into each test sample, yielding $(\tilde{x}, \tilde{y})$, and record the fraction that the global model misclassifies as the attacker's target label:
$$R = \frac{1}{|\tilde{T}|} \sum_{(\tilde{x}, \tilde{y}) \in \tilde{T}} \mathbb{1}\{\arg\max w(\tilde{x}) = \tilde{y}\},$$
where $\tilde{T}$ is the set of all poisoned examples.

Table 10: Performance under the Out-Client Shift setting, reported using the metric $A_O$, across the Office-Caltech, Digits, PACS, and Office-31 benchmarks. Columns give the held-out target domain per benchmark, followed by the benchmark average (AVG). Refer to § 5.1 for detailed analysis.

Methods | Office-Caltech: →Ca →Am →W →D, AVG | Digits: →M →U →Sv →Sy, AVG | PACS: →P →AP →Ct →Sk, AVG | Office-31: →D →Am →W, AVG
FedAvg [118] | 58.12 67.47 43.10 80.00 62.17 | 32.60 47.20 13.91 13.54 26.81 | 52.28 46.16 60.74 51.12 52.57 | 14.28 8.93 21.51 14.90
FedProx [173] | 56.60 69.26 42.41 85.33 63.40 | 23.54 60.28 15.83 13.78 28.35 | 54.45 49.61 56.91 56.17 54.28 | 15.92 6.01 19.36 13.76
SCAFFOLD [78] | 36.07 47.36 45.86 59.33 47.15 | 67.61 82.39 7.79 14.52 43.07 | 43.85 23.81 45.07 39.79 38.12 | 12.44 5.58 10.88 9.63
FedProc [202] | 47.41 60.84 42.41 66.66 54.33 | 24.34 43.37 10.15 13.09 22.73 | 56.94 30.95 56.02 49.94 48.46 | 19.39 4.91 10.38 11.56
MOON [142] | 55.53 68.63 44.83 79.33 62.08 | 31.28 31.75 14.30 14.45 22.94 | 54.01 45.10 60.42 58.10 54.40 | 14.08 7.04 21.39 14.17
FedDyn [1] | 59.99 66.42 40.34 81.99 62.18 | 28.74 56.08 14.36 11.88 27.76 | 51.40 43.19 60.57 50.71 51.46 | 14.08 7.86 17.85 13.26
FedOPT [152] | 52.67 55.68 60.34 69.33 59.50 | 59.35 62.62 17.59 15.22 38.69 | 57.64 39.19 45.92 49.50 48.06 | 19.38 6.90 18.73 15.00
FedProto [168] | 60.35 66.94 58.62 76.00 65.47 | 43.67 58.08 13.49 13.73 32.24 | 65.07 36.56 56.98 57.87 54.12 | 31.01 7.08 23.54 20.54
FedNTD [45] | 58.66 69.47 44.83 84.00 64.23 | 24.15 58.56 18.44 13.68 28.70 | 64.50 47.47 58.52 53.43 55.98 | 17.75 7.12 27.97 17.61
Designed for the Federated Domain Adaptation setting:
COPA [50] | 55.17 67.05 56.55 78.33 64.27 | 58.93 92.20 10.49 14.90 44.13 | 71.61 53.74 63.12 56.60 61.26 | 43.06 6.69 31.26 27.00
KD3A [55] | 54.73 70.00 68.61 75.33 67.16 | 83.91 97.46 14.33 34.03 57.43 | 76.99 56.91 67.63 55.70 64.30 | 44.28 8.04 37.08 29.80
Designed for the Federated Domain Generalization setting:
COPA [50] | 57.32 66.31 48.27 70.00 60.47 | 33.76 47.32 13.26 15.16 27.37 | 59.54 35.33 56.67 57.93 52.36 | 21.22 5.48 19.49 15.39
FedGA [157] | 44.28 54.10 51.72 71.33 55.35 | 58.74 86.92 9.16 14.81 42.40 | 59.00 35.01 43.20 53.60 47.70 | 22.24 5.15 10.63 12.67

E.3 Fairness Metrics

In federated learning, participants incur varying costs and offer data of unequal value, making fair reward allocation and uniform performance critical. The federated settings we experiment with are outlined in Table 11, and Figure 2 provides an overview of the datasets involved. We capture these with two complementary metrics:
Contribution Impact (C). Rather than using static weights $\alpha_i$, we quantify each client's real influence on global accuracy by a leave-one-out procedure. Let $w$ be the assembled global model
and $w_i$ the contribution from client $i$. Excluding $i$ yields
$$w_{-i} = \frac{w - \alpha_i w_i}{1 - \alpha_i}.$$
We measure the average accuracy over all test domains before and after removal,
$$\Delta_i = \bar{A} - \frac{1}{|U|} \sum_{u \in U} A_u^{-i},$$
where $\bar{A}$ is the mean accuracy and $A_u^{-i}$ denotes performance on domain $u$ without client $i$. Normalizing the vector $\Delta = (\Delta_1, \ldots, \Delta_M)$ and the weight vector $\alpha$, we define the contribution score
$$C = \frac{\Delta \cdot \alpha}{\|\Delta\|_2 \|\alpha\|_2},$$
so that a higher $C$ indicates closer alignment between actual impact and nominal weights.

Accuracy Consistency ($V$). To evaluate how evenly the model serves all clients, we compute the standard deviation of per-domain accuracies:
$$V = \sqrt{\frac{1}{|U|} \sum_{u \in U} \left(A_u - \bar{A}\right)^2} \times 100\%.$$
A smaller $V$ reflects more uniform performance across heterogeneous client distributions.

[Figure 2: Visualization for Digits [214, 63, 216, 80] (MNIST (M), USPS (U), SVHN (SV), SYN (SY)), Office-Caltech [39] (Caltech (Ca), Webcam (W), Amazon (Am), Dslr (D)), PACS (Photo (Ph), Art Painting (AP), Cartoon (C), Sketch (S)), and Office-31 [158] (Dslr (D), Amazon (AM), Webcam (W)). Refer to § 5.]

F Benchmark Setup
F.1 Data Augmentation
To improve model robustness under data heterogeneity, we apply standard image transformations on each client's local data, implemented via PyTorch routines:
•RandomCrop(size): Crop a random patch of the specified size (e.g., 32×32 or 224×224).
•RandomHorizontalFlip(p): Flip images horizontally with probability p (default p = 0.5).
•Normalize(mean, std): Scale pixel values to zero mean and unit variance using dataset-specific mean and std vectors.

Table 11: Experiment configuration of the different federated scenarios. Image Size applies after the resize operation. |C| denotes the classification scale. |K| denotes the number of clients. E is the communication epochs for federation.
B means the training batch size.

Scenario | Size | |C| | Network w | Rate η | |K| | E | B
Label Skew Setting § 5:
Cifar-10 | 32 | 10 | SimpleCNN | 1e-2 | 10 | 100 | 64
Fashion-MNIST | 32 | 10 | SimpleCNN | 1e-2 | 10 | 100 | 64
MNIST | 32 | 10 | SimpleCNN | 1e-2 | 10 | 100 | 64
Cifar-100 | 32 | 100 | ResNet-50 | 1e-1 | 10 | 100 | 64
Tiny-ImageNet | 32 | 200 | ResNet-50 | 1e-2 | 10 | 100 | 64
Domain Skew / Out-Client Shift Settings § 5:
Digits | 32 | 10 | ResNet-18 | 1e-2 | 4/3 | 50 | 16
PACS | 224 | 7 | ResNet-34 | 1e-3 | 4/3 | 50 | 16
Office Caltech | 224 | 10 | ResNet-34 | 1e-3 | 4/3 | 50 | 16
Office-Home | 224 | 65 | ResNet-34 | 1e-3 | 4/3 | 50 | 16

F.2 Implementation Details
Optimization and Training Protocol. All methods are evaluated under a common protocol: each client performs U = 10 local SGD epochs per communication round, using a batch size of 64, momentum 0.9, and weight decay $10^{-5}$. The learning rate η and the number of global rounds E vary by task and are specified in Table 11. We choose E such that further rounds yield negligible improvement across all algorithms. Experiments are implemented in PyTorch, seeded for reproducibility, and run on NVIDIA RTX 3090 GPUs.

Model Architectures. For lightweight benchmarks, we adopt a simple CNN with two 5×5 convolutional layers (each followed by 2×2 max-pooling), hereafter called SimpleCNN. Larger datasets use ResNet variants [53]. Exact layer counts and input resolutions per scenario are detailed in Table 11.

Adversary Configurations. When simulating malicious clients, we vary the fraction of adversaries Υ ∈ {0.2, 0.4}. For data-poisoning attacks (SymFlip, PairFlip), the corruption probability is set to ϵ = 0.5. Model-poisoning strategies follow the parameter perturbation schemes described in Section A.2.2.

G Future Work
Building on the state of the art, we identify several key challenges
for next-generation federated systems:
•Balancing Generalization and Robustness. Heterogeneous client data drives the need for broad generalization, yet robustness mechanisms must detect and exclude malicious contributions. When benign clients happen to hold atypical data, they risk being misclassified as attackers, degrading overall performance. Future work should develop joint objectives that preserve legitimate diversity while filtering adversarial behavior.
•Reconciling Generalization with Fairness. Optimizing for average accuracy across all clients can obscure poor performance on minority distributions, whereas fairness aims for uniform accuracy regardless of data volume or difficulty. Multi-objective formulations that simultaneously maximize mean accuracy and minimize inter-client variance are needed to avoid this “majority wins” trade-off.
•Synergies Between Robustness and Fairness. Accurate contribution metrics underpin both robust outlier rejection and fair reward allocation. By integrating anomaly detection into incentive mechanisms, systems can ensure that low-contribution or malicious clients are neither over-rewarded nor under-penalized, fostering both security and long-term participation.
•Vertical FL with Generalization, Robustness, and Fairness. In vertical settings, clients hold complementary feature views of the same entities. Aligning heterogeneous feature sets without leaking private attributes remains an open problem. Moreover, attackers may exploit feature inference or label inference attacks, demanding novel defenses such as secure multi-party computation or homomorphic encryption. Finally, feature-level fairness—ensuring no single view dominates the global model—requires new measures of contribution and bias mitigation.
•Federating Large Pretrained Models. Fine-tuning massive foundation models on decentralized data promises strong personalization, but communication costs and intellectual property concerns pose significant barriers.
Research should explore parameter-efficient updates (e.g., adapters, low-rank updates), encrypted or compressed aggregation protocols, and incentive schemes that protect model ownership while enabling collaborative improvement.
•Enabling Reasoning-Centric Personalization. Current federated learning systems largely optimize for classification or regression tasks, while neglecting reasoning capabilities such as multi-hop inference, commonsense logic, or context-aware question answering. These tasks require richer representations and deeper model understanding—often beyond local training signals. Future research should explore reasoning-aware objectives, knowledge distillation across clients, and hierarchical model structures that enable reasoning patterns to emerge across non-iid data distributions. Additionally, curriculum-based or scaffolded training schedules tailored to client capabilities may allow reasoning modules to be co-learned without centralized supervision.
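To make the fairness metrics of Appendix E.3 concrete, the contribution score C and the accuracy-consistency metric V can be sketched in a few lines of numpy. This is a minimal sketch, not the benchmark's code: the function names are ours, and the leave-one-out impact vector Δ is assumed to be precomputed as described there.

```python
import numpy as np

def contribution_impact(delta, alpha):
    """Contribution score C: cosine similarity between the leave-one-out
    impact vector Delta (mean accuracy drop when client i is excluded)
    and the nominal aggregation weights alpha. Values near 1 mean the
    nominal weights track each client's actual contribution."""
    delta, alpha = np.asarray(delta, float), np.asarray(alpha, float)
    return float(delta @ alpha / (np.linalg.norm(delta) * np.linalg.norm(alpha)))

def accuracy_consistency(domain_accs):
    """Metric V: standard deviation of per-domain accuracies, in percent.
    Smaller values indicate more uniform performance across domains."""
    return float(np.std(np.asarray(domain_accs, float)) * 100.0)
```

For example, identical Δ and α directions give C = 1 (perfect alignment), while equal per-domain accuracies give V = 0.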
arXiv:2505.16855v1 [cs.CL] 22 May 2025

Nested Named Entity Recognition as Single-Pass Sequence Labeling
Alberto Muñoz-Ortiz,1 David Vilares,1 Caio Corro,2 Carlos Gómez-Rodríguez1
1Universidade da Coruña, CITIC, Spain
2INSA Rennes, IRISA, Inria, CNRS, Université de Rennes, France
{alberto.munoz.ortiz, david.vilares, carlos.gomez}@udc.es
caio.corro@insa-rennes.fr

Abstract
We cast nested named entity recognition (NNER) as a sequence labeling task by leveraging prior work that linearizes constituency structures, effectively reducing the complexity of this structured prediction problem to straightforward token classification. By combining these constituency linearizations with pretrained encoders, our method captures nested entities while performing exactly n tagging actions. Our approach achieves competitive performance compared to less efficient systems, and it can be trained using any off-the-shelf sequence labeling library.

1 Introduction
Named Entity Recognition (NER) involves identifying token spans that refer to specific named entities. Traditional approaches use the BIO scheme (Ramshaw and Marcus, 1995), which assigns labels to define the beginning (B), inside (I), and outside (O) tokens of each entity. BIO NER - and other BIO tasks - can be naturally addressed using sequence labeling approaches (Ratnaparkhi, 1996; Lafferty et al., 2001; Lample et al., 2016; Yang and Zhang, 2018). However, nested NER (NNER) challenges these approaches, as it involves entities embedded within other entities, as long as their spans do not overlap. For example, in Figure 1, the person mention “president of the US” contains the geopolitical named entity “the US”. This nested structure breaks the BIO tagging assumption that each token belongs to only one entity. As a result, structured prediction is a better fit for NNER, as it can model interdependent spans and capture hierarchical relationships between nested entities.
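The limitation described above can be made concrete with a small sketch; the span indices, entity type labels, and helper name are illustrative, not from the paper:

```python
def to_bio(tokens, spans):
    """Convert entity spans [(start, end, type)] (end exclusive) to BIO tags.

    Raises ValueError when two spans overlap, which is exactly the case
    flat BIO tagging cannot represent: nested entities."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        if any(tags[i] != "O" for i in range(start, end)):
            raise ValueError(f"nested/overlapping span {(start, end, etype)}")
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags

tokens = "Lincoln was president of the US".split()
print(to_bio(tokens, [(0, 1, "PER")]))  # ['B-PER', 'O', 'O', 'O', 'O', 'O']
try:
    # "president of the US" (PER) contains "the US" (GPE): BIO breaks down
    to_bio(tokens, [(2, 6, "PER"), (4, 6, "GPE")])
except ValueError as e:
    print("cannot encode:", e)
```

The second call fails precisely because the flat tag sequence has no way to mark the inner entity inside the already-tagged PER span.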
[Figure 1: Illustration of a nested NER annotated sentence as (a) spans, (b) a constituent tree, and (c) four ways to linearize that tree into a sequence of labels, for “Lincoln was president of the US” with PER “Lincoln”, PER “president of the US”, and ORG “the US”:
abs: (1,S,PER) (1,S,–) (2,PER,–) (2,PER,–) (3,ORG,–) (1,S,–)
rel: (1,S,PER) (0,S,–) (1,PER,–) (0,PER,–) (1,ORG,–) (-2,S,–)
dyn: (1,S,PER) (0,S,–) (1,PER,–) (0,PER,–) (1,ORG,–) (1,S,–)
4tg: (→⇒,S,PER) (→⇐,S',–) (→⇒,PER,–) (→⇐,PER',–) (→⇐,ORG,–) (←,S,–)
We compare three depth-based encodings—absolute, relative, dynamic—and one transition-based encoding—tetra-tagging (see §2.1). Binarized non-terminals in the tetra-tagging encoding are marked with '.]

Prior work has tackled NNER as a hierarchical or layered sequence labeling task, applying token-level tagging multiple times, either detecting outer entities first and then inner ones (Shibuya and Hovy, 2020), or identifying inner entities first and then expanding outward (Wang et al., 2021). Other paradigms classify each possible text span as either an entity or not (Yu et al., 2020; Sohrab and Miwa, 2018; Yuan et al., 2022; Corro, 2023), use sequence-to-sequence techniques (Yan et al., 2021; Tan et al., 2021), or exploit hypergraph representations (Lu and Roth, 2015; Katiyar and Cardie, 2018; Yan et al., 2023). Alternatively, some methods transform NNER into constituency parsing by mapping samples to constituent trees
and relying on parsing algorithms (Finkel and Manning, 2009; Wang et al., 2018; Fu et al., 2021; Lou et al., 2022). More recently, Yang and Tu (2022) applied a pointer-network-based bottom-up constituency parser to NNER, offering linear-time decoding. In this context, constituency tree linearization—first introduced by Gómez-Rodríguez and Vilares (2018) and later explored in related work (Amini and Cotterell, 2022; Kitaev and Klein, 2020)—reframes the structured prediction task as a sequence labeling problem. This approach achieves competitive performance while improving computational efficiency, as it outputs the tree in exactly n tagging actions. This is in contrast with previous sequence-to-sequence models for NNER where the number of tagging actions depends on the number of predicted mentions (Miculicich and Henderson, 2020).

Building on this line of work, we present a simple yet effective NNER approach that applies constituency tree linearizations within a standard sequence labeling framework. Our method is single-pass, parser-free, and integrates seamlessly with pretrained encoders. It captures nested structures efficiently and achieves competitive benchmark results, all while requiring minimal implementation effort. Code will be released upon acceptance.

2 NNER Through Constituent Parsing as Sequence Labeling
We represent NNER structures using constituent trees and adopt a sequence labeling strategy originally developed for syntactic parsing. Our hypothesis is that this formulation aligns well with NNER due to two key properties: (1) the trees are relatively shallow, involving fewer output levels than in full parsing,¹ which eases learning; and (2) the encodings provide full coverage of well-formed trees, regardless of nesting depth.
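As an illustration of how such a linearization turns a nested structure into per-position labels, the following sketch computes the depth component n_i of the absolute and relative encodings described in §2.1. It is a minimal sketch under our own assumptions: entity spans are end-exclusive (start, end) pairs, the nonterminal and unary components (c_i, u_i) are omitted, and the helper names are ours.

```python
def abs_encoding(n_words, spans):
    """Depth component of the absolute encoding: n_i counts the tree nodes
    (entity spans plus the implicit sentence root) covering both w_i and
    w_{i+1}, for each fencepost i = 0 .. n_words - 2."""
    return [1 + sum(s <= i and i + 1 < e for (s, e) in spans)
            for i in range(n_words - 1)]

def rel_encoding(abs_seq):
    """Relative encoding: first value kept absolute, then the consecutive
    differences n_i - n_{i-1}."""
    return [abs_seq[0]] + [b - a for a, b in zip(abs_seq, abs_seq[1:])]

# "Lincoln was president of the US": the outer span (2, 6) nests (4, 6)
print(abs_encoding(6, [(2, 6), (4, 6)]))                 # [1, 1, 2, 2, 3]
print(rel_encoding(abs_encoding(6, [(2, 6), (4, 6)])))   # [1, 0, 1, 0, 1]
```

The output is consistent with the depth values shown for the abs and rel rows of Figure 1.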
Furthermore, unlike TreeCRF models (Fu et al., 2021), span-based (Lou et al., 2022) or transition-based parsers (Gao et al., 2023), our method avoids complex decoding, enabling linear-time inference—n tagging actions—with standard architectures.

¹In full constituent parsing, large gaps between nonterminals have been shown to challenge sequence tagging approaches (Vilares et al., 2019).

2.1 Linearizations
We use three depth-based (absolute, relative, dynamic) and one transition-based (tetra-tagging) encoding strategies. These are illustrated in Figure 1 using a sentence with nested entities.

Table 1: Coverage statistics across datasets and encoding strategies. # Labels indicates the number of unique labels in the training set. Missing shows the number of labels in dev/test not seen during training. % dev/test indicates the percentage of dev/test labels covered by the training vocabulary.

Dataset | Enc. | # Labels (n, c, u) | Missing (n, c, u) | % dev | % test
ACE2004 | abs | (6, 11, 8) | (0, 1, 0) | 100.00 | 99.96
ACE2004 | dyn | (10, 11, 8) | (0, 1, 0) | 100.00 | 99.96
ACE2004 | rel | (10, 11, 8) | (0, 1, 0) | 100.00 | 99.96
ACE2004 | 4tg | (8, 22, 8) | (1, 6, 1) | 100.00 | 99.96
ACE2005 | abs | (7, 29, 9) | (0, 3, 1) | 99.88 | 99.98
ACE2005 | dyn | (9, 29, 9) | (0, 3, 1) | 99.88 | 99.98
ACE2005 | rel | (10, 29, 9) | (0, 3, 1) | 99.88 | 99.98
ACE2005 | 4tg | (8, 58, 9) | (1, 6, 1) | 99.88 | 99.98
GENIA | abs | (4, 12, 13) | (0, 0, 0) | 100.00 | 100.00
GENIA | dyn | (8, 12, 13) | (0, 0, 0) | 100.00 | 100.00
GENIA | rel | (8, 12, 13) | (0, 0, 0) | 100.00 | 100.00
GENIA | 4tg | (5, 22, 13) | (0, 1, 0) | >99.99 | >99.99
NNE | abs | (7, 264, 401) | (0, 14, 37) | 99.93 | 99.95
NNE | dyn | (11, 264, 401) | (0, 14, 37) | 99.93 | 99.95
NNE | rel | (10, 264, 401) | (0, 14, 37) | 99.93 | 99.95
NNE | 4tg | (13, 399, 401) | (0, 18, 37) | 99.93 | 99.95

Let w = [w_1, w_2, ..., w_|w|] be an input word sequence, where w_i represents the word at position i. We define a label l_i = (n_i, c_i, u_i) for positions 1 ≤ i ≤ |w|−1, where:
(1) n_i is an integer indicating the number of common ancestors between words w_i and w_{i+1}; (2) c_i is the non-terminal symbol at their lowest common ancestor; and (3) u_i denotes the unary branch for w_i, if one exists.

In the absolute encoding (abs), n_i^abs represents the number of common ancestors between w_i and w_{i+1}. While abs often suffers from a large and sparse label space in full constituent parsing, due to the depth and complexity of syntactic trees, we expect it to remain compact and easier to learn in NNER, where structures are shallower.

In the relative encoding (rel), in order to reduce the label set, n_i^rel represents the difference in the number of common ancestors: n_i^rel = n_i^abs − n_{i−1}^abs.

Combining these two strategies, the dynamic encoding (dyn) selects, at each position i, the most suitable encoding—absolute or relative—for representing the relation between tokens w_i and w_{i+1}. It defaults to the relative encoding and switches to absolute when two conditions are met: (1) the relative value n_i^rel ≤ −2, indicating a sharp drop in tree depth with empty non-terminal levels that must be filled in subsequent steps; and (2) the corresponding absolute value n_i^abs ≤ 3, ensuring the constituent node is close to the topmost tree level and the absolute label remains compact, avoiding sparsity and improving learnability. The dynamic encoding uses the absolute function only after abrupt structural changes. This affects few labels—0.75% in ACE2004, 0.73% in ACE2005, 0.15% in GENIA, and 1.16% in NNE—but many sentences: 14.64%, 12.45%, 3.86%, and 20.20%, respectively.

In addition to the depth-based encodings, we also consider the tetra-tagging (4tg) linearization introduced by Kitaev and Klein (2020). Tetra-tagging is a left-corner transition-based parsing algorithm for binary trees, which can be adapted to sequence labeling. It assigns two tags per word: one for the word itself and one for the adjacent fencepost (i.e., the boundary between consecutive words).
Each tag encodes whether the word (→, ←) or the lowest common ancestor spanning the fencepost (⇒, ⇐) is a left or right child. To fit our sequence labeling framework, we pair each word and fencepost tag into a single label. We reuse the notation l_i, but in this case s_i is refactored to store the two tags associated to w_i. Nonterminal symbols and unary chains are added aside the tags as in the depth-based encodings.

Label Vocabulary As entity spans are encoded with a finite set of labels observed during training, the model may face unseen labels at test time. Still, as shown in Table 1, the proposed encodings cover almost all entities in the development and test sets.

3 Experiments
We present our experimental setup and results.

Setup We use the MaChAmp framework (van der Goot et al., 2021) to train a multitask transformer-based model.² Specifically, we use roberta-large (Liu et al., 2019) and bert-large-uncased (Devlin et al., 2019) as shared encoders for the ACE and NNE datasets, and biobert-large-cased-v1.1 (Lee et al., 2020) for GENIA. We include RoBERTa as a high-performing option, and BERT for comparison, since it is the most commonly used model in related work. We
add three task-specific linear classification heads to predict each atomic component of the label (n_i, c_i, u_i). We adopt the default hyperparameters provided in the MaChAmp repository, as our primary goal is to show these models perform well in off-the-shelf sequence labeling setups with simple plug-and-play scripts. In particular, for encoding and decoding, we use the scripts provided by the CoDeLin library (Roca et al., 2023). This library provides homogeneous post-processing to produce trees from ill-formed label sequences; more details about the post-processing are in Appendix D.

²Although the transformer architecture has quadratic complexity, we follow common practice and refer to the rest of the system's complexity.

Table 2: F1 scores for different datasets and encodings, a comparison to prior work, and computational complexities. Highest F1 among O(n) models is in bold.

Model | Comp. | ACE2004 | ACE2005 | GENIA | NNE
Other approaches (O(n)):
W21 | O(n) | 86.1 | 84.7 | 78.7 | –
Y22 | O(n) | 86.9 | 85.5 | 78.2 | –
Other approaches (higher complexity):
S20 | O(n²) | 85.8 | 84.3 | 77.4 | –
C23 | O(n²) | 86.2 | 84.8 | 78.3 | –
F21 | O(n³) | 86.6±0.3 | 85.4±0.1 | 78.2±0.1 | –
L22 | O(n⁴) | 87.9 | 86.9 | 78.4 | 94.6
This work (O(n)), RoBERTa (ACE and NNE) and BioBERT:
abs | O(n) | 86.8±0.3 | 85.0±0.3 | 76.6±0.3 | 94.1±0.3
rel | O(n) | 86.9±0.3 | 85.5±0.3 | 75.1±0.7 | 94.3±0.1
dyn | O(n) | 87.7±0.2 | 86.0±0.3 | 75.3±0.5 | 94.3±0.2
4tg | O(n) | 86.1±0.5 | 84.7±0.5 | 74.6±0.8 | 94.1±0.1
This work (O(n)), BERT (for comparison):
abs | O(n) | 86.1±0.3 | 83.8±0.4 | – | 94.0±0.1
rel | O(n) | 87.0±0.4 | 85.2±0.3 | – | 94.1±0.1
dyn | O(n) | 87.0±0.1 | 85.2±0.5 | – | 94.2±0.0
4tg | O(n) | 86.0±0.5 | 84.7±0.2 | – | 94.1±0.1

Datasets We train and evaluate our models on four popular English NNER benchmarks: GENIA (Kim et al., 2003), ACE 2004 (Doddington et al., 2004), ACE 2005 (Walker et al., 2006), and NNE (Ringland et al., 2019). For ACE and GENIA, we preprocess the datasets following Shibuya and Hovy (2020). Information about the datasets is shown in Appendix Table 6. Notably, most entities are flat and short, and highly nested spans are rare.
Metrics We evaluate our models in terms of precision, recall, and F1 scores based on strict entity boundaries, requiring both correct spans and types.

3.1 Results
Table 2 reports precision, recall, and F1 scores for our sequence labeling models compared to previous work on NNER. All results are averaged over five runs to provide robust performance estimates (standard deviations are also reported). Our approach outperforms both comparable and more complex methods on the ACE and NNE datasets. However, this is not the case for GENIA, where the results—particularly recall—are lower. Although the dyn encoding predominantly uses the relative function—as most entities are flat or shallow—it outperforms the other encodings on every dataset except GENIA, effectively leveraging the best of the abs and rel linearizations. Although the absolute function is rarely used in the dynamic encoding, it yields small but consistent F1 improvements of 0.74% for ACE2004, 0.39% for ACE2005, 0.22% for GENIA, and 0.02% for NNE. As more than 99% of entities in GENIA are flat or singly nested, the dyn and rel encodings represent the entities almost identically. Tetra-tagging is the worst-performing encoding in all but one setup.

To better understand how the linearizations handle nested entities, we evaluate each encoding by analyzing model performance across different entity depths and lengths, using RoBERTa and BioBERT.

Results by depth Table 3 presents precision and recall by depth for each encoding and dataset. Since the number of predicted entities may vary across runs, we follow Shibuya and Hovy (2020) and report the precision results from the run whose F1 is closest to the average, and the average recall over five runs. abs performs best for flat entities on all datasets; dyn leads for singly nested entities in ACE2004 and ACE2005, rel in GENIA, and 4tg and dyn tie in NNE. For entities nested two or more times, abs performs best in GENIA and achieves the highest precision on the ACE datasets, but is outperformed in recall except in GENIA. In NNE, 4tg achieves the highest recall.

Table 3: Precision and recall by encoding and depth per dataset for entities at different depths. Best values for each dataset and depth shown in bold.

Enc. Depth | ACE2004 (Prec. Rec.) | ACE2005 (Prec. Rec.) | GENIA (Prec. Rec.) | NNE (Prec. Rec.)
abs 0  | 84.4 87.2 | 83.1 86.7 | 78.9 79.8 | 88.8 92.3
abs 1  | 78.9 85.9 | 71.0 84.3 | 44.4 43.5 | 90.6 95.4
abs ≥2 | 71.6 63.6 | 61.3 37.4 | 50.0 25.0 | 85.2 83.2
rel 0  | 83.5 85.8 | 81.6 85.9 | 77.7 76.1 | 87.8 91.7
rel 1  | 78.7 87.6 | 72.7 85.8 | 58.1 41.7 | 90.1 95.4
rel ≥2 | 62.7 62.3 | 59.6 40.5 | 0.0 0.0 | 86.4 82.4
dyn 0  | 84.0 86.8 | 82.6 86.5 | 78.1 76.3 | 88.1 91.9
dyn 1  | 79.4 88.1 | 75.3 86.9 | 51.7 42.9 | 90.4 95.6
dyn ≥2 | 68.0 66.2 | 58.6 49.6 | 0.0 0.0 | 85.3 82.7
4tg 0  | 83.1 86.0 | 81.2 85.7 | 78.2 77.8 | 89.3 92.2
4tg 1  | 73.6 86.8 | 73.4 85.2 | 32.5 45.6 | 90.8 95.5
4tg ≥2 | 53.9 59.6 | 59.2 58.8 | 3.8 10.0 | 83.5 84.9

Results by entity length Table 4 presents precision and recall by span length. Among the encodings, rel and dyn perform best on single-token spans and those of 2–4 tokens. Results for 5–9 token spans are mixed, while abs consistently outperforms the others on longer spans (≥10 tokens), except in ACE2005, where it is surpassed by dyn. 4tg performs well on short spans—particularly in NNE—but its performance drops sharply as span length increases. These trends suggest that dyn is well-suited for frequent, shorter entities, while abs offers greater robustness on longer ones.

Table 4: Precision and recall by encoding and length for entities of different span lengths. Best values for each dataset and length shown in bold.

Enc. Length | ACE2004 (Prec. Rec.) | ACE2005 (Prec. Rec.) | GENIA (Prec. Rec.) | NNE (Prec. Rec.)
abs 1   | 84.8 86.4 | 83.6 86.4 | 78.4 77.9 | 93.2 93.2
abs 2–4 | 88.1 87.1 | 79.2 82.2 | 74.8 73.9 | 89.8 90.3
abs 5–9 | 76.8 78.2 | 74.1 76.8 | 69.3 76.7 | 85.0 73.6
abs ≥10 | 70.5 71.0 | 67.9 61.5 | 81.8 75.0 | 80.0 20.0
rel 1   | 89.0 88.0 | 86.9 85.1 | 79.5 76.6 | 94.2 93.4
rel 2–4 | 89.5 86.3 | 82.5 82.4 | 74.1 68.3 | 91.0 89.6
rel 5–9 | 77.8 73.2 | 75.8 70.8 | 68.2 67.7 | 85.5 68.3
rel ≥10 | 55.3 53.4 | 63.7 60.9 | 65.3 53.3 | 44.4 20.0
dyn 1   | 88.9 88.2 | 86.4 86.2 | 79.8 77.2 | 94.0 93.4
dyn 2–4 | 89.8 86.7 | 82.4 83.0 | 73.9 66.8 | 90.7 89.8
dyn 5–9 | 77.4 75.4 | 76.7 72.6 | 69.2 67.7 | 84.4 69.9
dyn ≥10 | 64.5 65.1 | 69.3 67.1 | 65.2 50.0 | 50.0 20.0
4tg 1   | 88.3 87.6 | 86.1 85.8 | 78.7 77.8 | 93.9 93.5
4tg 2–4 | 87.8 85.5 | 79.8 80.8 | 72.2 70.4 | 89.9 91.0
4tg 5–9 | 72.6 74.6 | 72.2 71.2 | 56.7 68.9 | 76.9 72.0
4tg ≥10 | 48.3 57.8 | 52.6 62.1 | 16.8 58.3 | 18.2 30.0
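The strict-boundary evaluation used throughout this section can be sketched as follows. This is a generic micro-averaged implementation under our own assumptions, not the authors' evaluation script; mentions are assumed to be (start, end, type) tuples.

```python
from collections import Counter

def strict_prf(gold, pred):
    """Micro-averaged precision/recall/F1 over entity mentions: a predicted
    mention is correct only if its span boundaries AND its type match a gold
    mention exactly. Counters treat the mention lists as multisets, so
    duplicated spans (possible in nested corpora) are counted correctly."""
    gold_c, pred_c = Counter(gold), Counter(pred)
    tp = sum((gold_c & pred_c).values())   # exact-match true positives
    p = tp / max(sum(pred_c.values()), 1)
    r = tp / max(sum(gold_c.values()), 1)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```

For example, with three gold mentions of which two are predicted exactly (plus one spurious prediction), precision, recall, and F1 are all 2/3; a mention with correct boundaries but the wrong type scores zero.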
Conclusion
We cast nested NER for the first time as a single-pass sequence labeling task by first transforming NNER annotations into constituent trees, and then leveraging the linearization options these trees enable. In particular, we explored both depth-based (absolute, relative, and dynamic encodings) and transition-based encodings (tetra-tagging) previously proposed for this type of tree structure. Our experiments show that this approach models nested structures effectively, with competitive performance and no need for complex architectures—remaining lightweight and compatible with standard sequence labeling tools.

Among the explored encodings, the dynamic variant—which selects between absolute and relative schemes based on local structural cues—consistently achieves the highest F1 across datasets, except for GENIA. The transition-based tetra-tagging approach remains competitive on flat and short entities but degrades on longer spans, where depth-based strategies prove more effective.

Limitations
Due to the limited availability of freely accessible NNER datasets, we focus on the most widely used English benchmarks, including both public and proprietary ones. These datasets are commonly used in prior work and provide a solid basis for comparison. While our approach is evaluated on these established datasets, it is important to note that results may vary across benchmarks, as factors such as annotation scheme, entity types, nesting depth, domain, and language can influence performance.
Acknowledgements
This work was funded by SCANNER-UDC (PID2020-113230RB-C21) funded by MICIU/AEI/10.13039/501100011033; Xunta de Galicia (ED431C 2024/02); GAP (PID2022-139308OA-I00) funded by MICIU/AEI/10.13039/501100011033/ and by ERDF, EU; Grant PRE2021-097001 funded by MICIU/AEI/10.13039/501100011033 and by ESF+ (predoctoral training grant associated to project PID2020-113230RB-C21); LATCHING (PID2023-147129OB-C21) funded by MICIU/AEI/10.13039/501100011033 and ERDF; and Centro de Investigación de Galicia “CITIC”, funded by the Xunta de Galicia through the collaboration agreement between the Consellería de Cultura, Educación, Formación Profesional e Universidades and the Galician universities for the reinforcement of the research centres of the Galician University System (CIGUS). This research project was made possible through the access granted by the Galician Supercomputing Center (CESGA) to its supercomputing infrastructure. The supercomputer FinisTerrae III and its permanent data storage system have been funded by the NextGeneration EU 2021 Recovery, Transformation and Resilience Plan, ICT2021-006904, and also from the Pluriregional Operational Programme of Spain 2014-2020 of the European Regional Development Fund (ERDF), ICTS-2019-02-CESGA-3, and from the State Programme for the Promotion of Scientific and Technical Research of Excellence of the State Plan for Scientific and Technical Research and Innovation 2013-2016, State subprogramme for scientific and technical infrastructures and equipment of ERDF, CESG15-DE-3114. Caio Corro has received funding from the French Agence Nationale pour la Recherche under grant agreements InExtenso ANR-23-IAS1-0004 and SEMIAMOR ANR-23-CE23-0005.

References
Afra Amini and Ryan Cotterell. 2022. On parsing as tagging. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8884–8900, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Caio Corro. 2023. A dynamic programming algorithm for span-based nested named-entity recognition in O(n²). In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10712–10724, Toronto, Canada. Association for
Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction (ACE) program – tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04), Lisbon, Portugal. European Language Resources Association (ELRA).
Jenny Rose Finkel and Christopher D. Manning. 2009. Nested named entity recognition. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 141–150, Singapore. Association for Computational Linguistics.
Yao Fu, Chuanqi Tan, Mosha Chen, Songfang Huang, and Fei Huang. 2021. Nested named entity recognition with partially-observed TreeCRFs. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):12839–12847.
Nan Gao, Bowei Yang, Yongjian Wang, and Peng Chen. 2023. NTAM: A new transition-based attention model for nested named entity recognition. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 273–286. Springer.
Carlos Gómez-Rodríguez and David Vilares. 2018. Constituent parsing as sequence labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1314–1324, Brussels, Belgium. Association for Computational Linguistics.
Arzoo Katiyar and Claire Cardie. 2018. Nested named entity recognition revisited.
In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 861–871, New Orleans, Louisiana. Association for Computational Linguistics.
J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus—a semantically annotated corpus for bio-textmining. Bioinformatics, 19(suppl_1):i180–i182.
Nikita Kitaev and Dan Klein. 2020. Tetra-tagging: Word-synchronous parsing with linear-time inference. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6255–6261, Online. Association for Computational Linguistics.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML '01), pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260–270, San Diego, California. Association for Computational Linguistics.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: A pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Chao Lou,
https://arxiv.org/abs/2505.16855v1
Songlin Yang, and Kewei Tu. 2022. Nested named entity recognition as latent lexicalized constituency parsing. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6183–6198, Dublin, Ireland. Association for Computational Linguistics.
Wei Lu and Dan Roth. 2015. Joint mention extraction and classification with mention hypergraphs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 857–867, Lisbon, Portugal. Association for Computational Linguistics.
Lesly Miculicich and James Henderson. 2020. Partially-supervised mention detection. In Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference, pages 91–98, Barcelona, Spain (online). Association for Computational Linguistics.
Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora.
Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Conference on Empirical Methods in Natural Language Processing.
Nicky Ringland, Xiang Dai, Ben Hachey, Sarvnaz Karimi, Cecile Paris, and James R. Curran. 2019. NNE: A dataset for nested named entity recognition in English newswire. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5176–5181, Florence, Italy. Association for Computational Linguistics.
Diego Roca, David Vilares, and Carlos Gómez-Rodríguez. 2023. A system for constituent and dependency tree linearization. In Proceedings of XoveTIC Conference (XoveTIC '22), volume 14 of Kalpa Publications in Computing, pages 83–87. EasyChair.
Takashi Shibuya and Eduard Hovy. 2020. Nested named entity recognition via second-best sequence learning and decoding. Transactions of the Association for Computational Linguistics, 8:605–620.
Mohammad Golam Sohrab and Makoto Miwa. 2018.
Deep exhaustive model for nested named entity recognition. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2843–2849, Brussels, Belgium. Association for Computational Linguistics.
Zeqi Tan, Yongliang Shen, Shuai Zhang, Weiming Lu, and Yueting Zhuang. 2021. A sequence-to-set network for nested named entity recognition. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI-21), pages 3936–3942. International Joint Conferences on Artificial Intelligence Organization. Main Track.
Rob van der Goot, Ahmet Üstün, Alan Ramponi, Ibrahim Sharaf, and Barbara Plank. 2021. Massive choice, ample tasks (MaChAmp): A toolkit for multi-task learning in NLP. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 176–197, Online. Association for Computational Linguistics.
David Vilares, Mostafa Abdou, and Anders Søgaard. 2019. Better, faster, stronger sequence tagging constituent parsers. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3372–3383, Minneapolis, Minnesota. Association for Computational Linguistics.
Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. ACE 2005 multilingual training corpus. Linguistic Data Consortium.
Bailin Wang, Wei Lu, Yu Wang, and Hongxia Jin. 2018. A neural transition-based model for nested mention recognition. In Proceedings of the 2018 Conference on Empirical Methods in
Natural Language Processing, pages 1011–1017, Brussels, Belgium. Association for Computational Linguistics.
Yiran Wang, Hiroyuki Shindo, Yuji Matsumoto, and Taro Watanabe. 2021. Nested named entity recognition via explicitly excluding the influence of the best path. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3547–3557, Online. Association for Computational Linguistics.
Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, and Xipeng Qiu. 2021. A unified generative framework for various NER subtasks. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5808–5822, Online. Association for Computational Linguistics.
Yukun Yan, Bingling Cai, and Sen Song. 2023. Nested named entity recognition as building local hypergraphs. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11):13878–13886.
Jie Yang and Yue Zhang. 2018. NCRF++: An open-source neural sequence labeling toolkit. In Proceedings of ACL 2018, System Demonstrations, pages 74–79, Melbourne, Australia. Association for Computational Linguistics.
Songlin Yang and Kewei Tu. 2022. Bottom-up constituency parsing and nested named entity recognition with pointer networks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2403–2416, Dublin, Ireland. Association for Computational Linguistics.
Juntao Yu, Bernd Bohnet, and Massimo Poesio. 2020. Named entity recognition as dependency parsing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6470–6476, Online. Association for Computational Linguistics.
Zheng Yuan, Chuanqi Tan, Songfang Huang, and Fei Huang.
2022. Fusing heterogeneous factors with triaffine mechanism for nested named entity recognition. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3174–3186, Dublin, Ireland. Association for Computational Linguistics.

A Precision and Recall Results

We now show in Table 5 the precision and recall corresponding to the evaluation in Table 2 of the main text.

B Dataset information

Table 6 shows statistics about each dataset, including data on the depth and length of entities.

C Data coverage

The absolute, relative, and dynamic encodings are able to represent all nested entity structures in the datasets perfectly. The only unrecoverable cases under these encodings stem from annotation inconsistencies—such as crossing spans in ACE2004 or inverted boundaries in NNE—and not from encoding limitations. The tetra-tagging encoding (4tg), while equally expressive in theory, exhibits a small number of unrecoverable entities due to decoding ambiguities in the CoDeLin implementation rather than the encoding design itself. Table 7 reports the maximum recall for each encoding and dataset, with the number of unrecoverable entities shown in parentheses.

D Post-processing

Rather than imposing strict constraints during prediction, parsing as sequence labeling ensures well-formedness by applying simple heuristics to ill-formed label sequences. In our setup we apply the heuristics provided by CoDeLin (Roca et al., 2023), i.e., (1) in depth-based encodings, if multiple labels are
assigned to the same non-terminal node, we retain only the first; (2) if no label is assigned to a node, it is removed from the tree; (3) indexes out of range (e.g., -7 in the relative encoding when there are fewer than 7 levels to go up) are changed to the nearest legal index; and (4) in tetra-tagging, if the label sequence specifies an invalid transition, the transition is skipped. Words that remain unattached due to skipped transitions are attached to the lowest nonterminal in the rightmost tree spine.

E Model Size and Budget

We fine-tune three models: BERT Large (340M parameters), BioBERT Large (340M), and RoBERTa Large (355M). Each model is trained with 4 encoding strategies, across 4 datasets, and using 5 random seeds, resulting in a total of 240 training runs. Training was performed on a GPU cluster using NVIDIA A100 GPUs (40GB), with each run executed on a single GPU. The cumulative training time is estimated at approximately 300–360 GPU hours.

Model  Comp.   ACE2004              ACE2005              GENIA                NNE
               Prec.     Rec.       Prec.     Rec.       Prec.     Rec.      Prec.     Rec.
Other approaches (O(n))
W21    O(n)    85.4      86.7       84.2      85.3       78.2      79.2      –         –
Y22    O(n)    86.6      87.3       84.6      86.4       78.1      78.3      –         –
Other approaches (higher complexity)
S20    O(n^2)  85.9      85.7       83.8      84.9       77.8      76.9      –         –
C23    O(n^2)  87.4      85.0       84.4      85.3       79.3      77.3      –         –
F21    O(n^3)  86.7±0.4  86.5±0.4   84.5±0.4  86.4±0.2   78.2±0.7  78.2±0.8  –         –
L22    O(n^4)  87.4      88.4       86.0      87.9       78.4      78.5      94.3      95.0
This work (O(n)), RoBERTa (ACE and NNE) and BioBERT
abs    O(n)    86.8±0.5  86.8±0.1   84.1±0.6  85.9±0.4   77.6±0.4  75.6±0.3  94.0±0.3  94.2±0.3
rel    O(n)    87.4±0.4  86.4±0.3   85.3±0.6  85.7±0.3   78.4±0.7  72.1±1.8  94.4±0.0  94.1±0.1
dyn    O(n)    88.0±0.3  87.3±0.2   85.6±0.4  86.4±0.5   78.5±0.5  72.5±1.3  94.3±0.2  94.2±0.2
4tg    O(n)    85.8±1.1  86.4±0.1   84.0±0.5  85.5±0.5   75.2±1.1  74.1±0.9  94.0±0.2  94.3±0.1
BERT (for comparison)
abs    O(n)    85.9±0.4  86.2±0.5   82.7±0.4  85.0±0.4   –         –         94.1±0.1  94.0±0.1
rel    O(n)    87.9±0.4  86.0±0.4   84.9±0.7  85.5±0.0   –         –         94.4±0.1  93.8±0.1
dyn    O(n)    87.9±0.1  86.2±0.2   84.9±0.5  85.4±0.7   –         –         94.4±0.0  94.0±0.1
4tg    O(n)    86.0±0.7  86.0±0.3   83.6±0.2  85.8±0.3   –         –         94.1±0.1  94.1±0.1

Table 5: Performance metrics for each dataset and encoding. Among O(n) methods, the top values for each dataset are in bold.

Characteristic   ACE2004   ACE2005   GENIA        NNE
Topic            News      News      Biomedical   News
Sentences        7 762     9 335     18 549       49 211
Ent. Types       7         7         5            114
Total Entities   27 751    30 956    57 063       279 796
Avg. Depth       1.34      1.29      1.10         1.88
Med. Depth       1         1         1            2
Max. Depth       6         6         4            6
Percentage of Entities at Each Nesting Depth
0                71.77     75.81     90.29        36.45
1                22.98     19.98     9.45         42.73
2                4.61      3.73      0.25         17.83
3                0.55      0.44      0.00         2.83
4                0.08      0.02      0.00         0.15
5                0.01      0.01      0.00         0.00
Percentage of Entities with Different Span Lengths
1 Token          42.46     46.68     52.50        61.77
2–4 Tokens       39.49     37.25     42.42        35.06
5–9 Tokens       12.51     10.94     4.74         3.10
≥10 Tokens       5.54      5.13      0.35         0.06

Table 6: Dataset details, including the percentage of entities at each nesting depth and their distribution by length. Nesting depth refers to the number of hierarchical levels an entity is embedded within another, with 0 indicating a flat entity.
       ACE2004     ACE2005     GENIA     NNE
abs    99.97 (8)   100.00      100.00    >99.99 (1)
rel    99.97 (8)   100.00      100.00    >99.99 (1)
dyn    99.97 (8)   100.00      100.00    >99.99 (1)
4tg    99.97 (9)   99.99 (2)   100.00    99.96 (118)

Table 7: Maximum possible recall percentages by encoding and dataset, with the number of unrecoverable entities shown in parentheses.
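The index-repair heuristic (3) from Appendix D can be sketched as follows. The function name and the exact legal range are illustrative assumptions; the real bounds depend on the encoding and on CoDeLin's implementation.

```python
def clamp_level_index(rel_index, levels_above):
    """Sketch of heuristic (3): an out-of-range relative index (e.g. -7
    when fewer than 7 levels exist above the current node) is changed
    to the nearest legal index. The legal range assumed here runs from
    -levels_above (climb to the root) to +1 (open one new level)."""
    lowest, highest = -levels_above, 1
    return max(lowest, min(highest, rel_index))

print(clamp_level_index(-7, 3))  # -7 with only 3 ancestor levels -> -3
print(clamp_level_index(0, 3))   # already legal -> unchanged
```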
Comparative analysis of subword tokenization approaches for Indian languages

Sudhansu Bala Das (a,∗), Samujjal Choudhury (a), Tapas Kumar Mishra (a), Bidyut Kr. Patra (b)
a National Institute of Technology (NIT), Rourkela, Odisha, India
b Indian Institute of Technology (IIT), Varanasi, Uttar Pradesh, India

Abstract

Tokenization is the act of breaking down text into smaller parts, or tokens, that are easier for machines to process, and it is a key phase in machine translation (MT) models. Subword tokenization enhances this process by breaking down words into smaller subword units, which is especially beneficial in languages with complicated morphology or a vast vocabulary. It is useful for capturing the intricate structure of words in Indian languages (ILs), such as prefixes, suffixes, and other morphological variations. These languages frequently use agglutinative structures, in which words are formed by combining multiple morphemes such as suffixes, prefixes, and stems. As a result, a suitable tokenization strategy must be chosen to address these scenarios. This paper examines how different subword tokenization techniques, namely SentencePiece, Byte Pair Encoding (BPE), and WordPiece, affect ILs. The effectiveness of these subword tokenization techniques is investigated in statistical, neural, and multilingual neural machine translation models. All models are examined using standard evaluation metrics, such as the Bilingual Evaluation Understudy (BLEU) score, TER, METEOR, CHRF, RIBES, and COMET. Based on the results, for the majority of language pairs in the statistical and neural MT models, the SentencePiece tokenizer consistently performed better than the other tokenizers in terms of BLEU score, whereas BPE outperformed the other tokenization techniques in the multilingual neural machine translation model.
The results also show that, despite using the same tokenizer and dataset for each model, translations from ILs to English surpassed translations from English to ILs.

Keywords: Machine Translation, Indic Languages, Tokenization, Evaluation Metrics

∗Corresponding author. Email address: baladas.sudhansu@gmail.com (Sudhansu Bala Das)

1. Introduction

Language is a constantly evolving medium for communicating ideas and emotions. It not only serves as a means of expressing our emotions and thoughts, but also takes communication a step further by providing context and cultural significance in addition to words. Since languages differ from region to region, intercommunication among people becomes challenging in the absence of interpreters. This is where natural language processing (NLP) comes into play. NLP is a branch of artificial intelligence that investigates the relationship between machines and human language. The overall objective is to allow machines to comprehend, translate, and produce human-like text, allowing for more effective interactions between people who speak different languages. Machine Translation (MT) arises as an important application in the field of NLP. At its core, MT uses computational algorithms to automatically convert text or speech between languages. The importance of MT stems not only from the way it can help interaction between cultures, but also from its capacity to bring down linguistic obstacles in real time, encouraging international communication, awareness, and sharing of information [13]. As MT evolves, the difficulties it encounters become more diverse and complex. Language complexities, such as sentence structures, idiomatic phrases, and cultural
https://arxiv.org/abs/2505.16868v1
differences, present significant challenges in producing precise and contextually relevant translations. In the field of MT, statistical machine translation (SMT) has served an important role in breaking down obstacles associated with language [14]. SMT uses statistical models developed from large datasets to generate probabilistic predictions for translating text from one language to another. Unlike rule-based systems [15], which use predetermined rules of grammar, SMT acquires patterns from large corpora, enabling it to adjust to the complexity of natural language. The mathematical framework of SMT enables the system to make choices according to the likelihood of particular translation options [16]. This statistical method improves SMT's adaptability, rendering it efficient over a wide range of language pairs and subject areas. Nevertheless, SMT faces significant challenges in successfully handling and comprehending input text: the adaptability of these methods to new languages or domains is limited, and the creation and upkeep of rules or models may necessitate a significant amount of manual labor. On the other hand, Neural Machine Translation (NMT) is a paradigm shift that uses neural networks to directly learn the translation mapping from input to output sequences [17] [47]. In contrast to SMT, NMT models produce more fluent and contextually accurate translations because of their encoder-decoder architecture, which captures complex relationships among words and phrases. Furthermore, advances in NMT have resulted in the development of Multilingual Neural Machine Translation (MNMT) [18], and researchers' interest in MNMT has grown over the past few years. One of the most significant benefits of MNMT is the ability to translate multiple language pairs using only one model [19].
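One common way to let a single model serve many language pairs, used widely in the MNMT literature (though not necessarily in the specific systems evaluated here), is to prepend a target-language token to each source sentence; the tag format below is illustrative only.

```python
def add_target_tag(source_sentence, target_lang):
    """Prepend a target-language token so one shared model can be
    steered toward the desired output language. The <2xx> tag format
    is a common convention, not a fixed standard."""
    return f"<2{target_lang}> {source_sentence}"

# The same English source, routed to Hindi and to Tamil:
tagged = [add_target_tag("hello world", lang) for lang in ("hi", "ta")]
print(tagged)  # ['<2hi> hello world', '<2ta> hello world']
```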
While SMT, NMT, and MNMT have different underlying architectures and techniques, they all use tokenization as a preprocessing step. Tokenization is the process of breaking down input text into smaller units, such as words or subwords, so that translation models can process them more efficiently. Proper tokenization is critical for accurately capturing linguistic nuances in Indian languages (ILs), which are known for their complicated structure and diverse morphological features. Specialized tokenization techniques designed for the unique characteristics of ILs can help enhance translation precision as well as fluency [22]. Exploring and implementing appropriate tokenization methods is therefore necessary for producing high-quality machine translations for Indian languages. Tokenization aids in the identification of specific morphemes inside words, which are necessary for comprehending the structure as well as the significance of words in the framework of morphological analysis [24]. By properly tokenizing the text, we can distinguish suffixes, prefixes, root phrases, and other morphological components, enabling more accurate interpretation and analysis [23]. Hence, tokenization is an essential component of the morphological analysis process, as it allows for the detection and evaluation of distinct morphemes within words, which is required for machine translation tasks. The main objective of this paper is therefore to analyze and compare the efficacy of various subword tokenization methods for statistical, neural, and multilingual models on 11 Indian languages. The investigation focuses on three subword tokenization methods:
Byte Pair Encoding (BPE) [9], WordPiece [25], and SentencePiece [21]. These methods have been chosen due to their applicability and possible influence on machine translation efficiency. The evaluation of these techniques attempts to identify the advantages and disadvantages of each for handling text data written in Indian languages. Through experimentation and analysis, this paper aims to offer insights into the best tokenization strategy for Indian language processing tasks, especially in the context of machine translation (MT). The main contributions of this paper are as follows:

1. To the best of our knowledge, this is the first paper to examine the effects of different subword tokenization methods, namely WordPiece [25], SentencePiece [21], and Byte Pair Encoding (BPE) [9], on 11 Indian languages.
2. The paper explores the significance of tokenization in various machine translation models, including SMT, NMT, and MNMT, and provides comparisons as well as insights into their efficacy.
3. The paper compares the efficacy of different MT models and tokenization methods across 11 Indian languages using a wide range of MT metrics, such as BLEU [1], RIBES [2], METEOR [3], TER [4], CHRF [5], and COMET [6].

The remainder of this paper is organized as follows: Section 2 gives a brief overview of tokenization and notable related work. Section 3 discusses the different tokenization methods and their impact on statistical, neural, and multilingual models. Section 4 describes the experimental framework, and Section 5 presents the evaluation metrics. Section 6 discusses the results, and Section 7 presents conclusions.

2. Tokenization

Early studies emphasize the significance of the tokenization algorithm, since tokenization constitutes one of the first phases in any information retrieval or natural language processing system. Over the last few decades, tokenization has undergone numerous phases.
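The earliest of these phases, plain whitespace splitting, takes only a line of Python; the tiny closed vocabulary below is hypothetical and serves to illustrate the out-of-vocabulary (OOV) problem that motivates subword methods.

```python
def whitespace_tokenize(text):
    """The simplest scheme: split the text on runs of whitespace."""
    return text.split()

def oov_rate(tokens, vocabulary):
    """Fraction of tokens absent from a closed word vocabulary."""
    return sum(t not in vocabulary for t in tokens) / len(tokens)

vocab = {"the", "cat", "sat"}  # hypothetical tiny word vocabulary
tokens = whitespace_tokenize("the cat sat on the mat")
print(tokens)                   # ['the', 'cat', 'sat', 'on', 'the', 'mat']
print(oov_rate(tokens, vocab))  # 'on' and 'mat' are OOV -> 2/6
```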
One of the oldest tokenization techniques is whitespace tokenization, which divides text into sections using whitespace characters such as tabs and spaces. Even though whitespace tokenization is easy and straightforward to use, it may not be adequate for tasks requiring finer token granularity or for languages with intricate morphological structures. Traditional tokenization methods divide text into words according to whitespace or punctuation marks [29]. However, this approach may result in out-of-vocabulary (OOV) words and impair translation quality in languages with complex morphology or huge vocabularies. More advanced methods, such as word-based tokenization [28], have been developed to overcome the drawbacks of whitespace tokenization. Word tokenization algorithms, which frequently rely on dictionaries or language-specific rules, seek to divide text into meaningful word units. The NLP community, for instance, makes extensive use of the word_tokenize function in the NLTK library [26] and the tokenizer in the spaCy library [27]. Several notable studies in the literature concentrate on improving these subword tokenization techniques. Ding et al. [30], for instance, investigate how the number of BPE merges affects the effectiveness of machine translation. Provilkov et al. [31] suggest a dropout technique for each merge step of BPE that enhances machine translation effectiveness by breaking the deterministic
nature of the algorithm. According to Bostrom and Durrett [32], there are better methods for language pretraining than BPE because it does not make good use of the vocabulary space. To evaluate the impact of incorporating subword tokens, Nayak et al. [33] compare the activations of BERT's attention layers under WordPiece and word-level tokenization. They discover that modeling semantically significant connections among words is hampered by a vocabulary that relies heavily on frequency-based character combinations. Furthermore, representations derived from tokenization that uses word-occurrence data rely more on frequency information than on semantics [34]. Banerjee et al. [35] integrated BPE with a commercially available morphological segmenter in translations from Bengali and Hindi to English; in addition to statistical segmentation techniques, they employed a retrained version of a linguistically inspired segmentation model for Arabic. Recent developments in deep learning have further extended the range of tokenization techniques. Subword tokenization and learned embeddings are used by models such as BERT [37] and GPT [38] to effectively gather contextual information. By integrating contextualized embeddings, such models can more accurately capture the nuances and complexities of natural language. Tokenization also reduces the complexity of language data, which facilitates processing and analysis by algorithms, and it standardizes text representation, guaranteeing coherence among various language processing models and tasks. According to Park et al. [30], BPE is not the best method for language pretraining because it does not make good use of the vocabulary space. To evaluate the impact of incorporating subword tokens, Nayak et al. [41] contrast the activations of BERT's attention layers with WordPiece and word-level tokenization.
They discover that modeling semantically significant connections between words is hampered by a vocabulary that relies heavily on frequency-based character combinations.

3. Different Tokenization and their Impact on Statistical, Neural and Multilingual Models

In this section, the different subword tokenizations used in our experiments are discussed. Subword tokenization is an approach that is especially important for Indian languages (ILs) because of their complex morphology and lack of a standardized orthography. The term "complex morphology" refers to how words in a language can change form depending on the situation. For example, in languages like Bengali or Hindi, words can take different forms depending on factors such as number, gender, and tense. Subword tokenization algorithms such as Byte-Pair Encoding (BPE) [9], SentencePiece [21], and WordPiece [25] have successfully handled the complexity of languages like Hindi, Bengali, and Tamil by breaking down words into smaller parts. Breaking the text into subword units draws on a range of linguistic characteristics, including grammatical rules, character sequences, and frequency. This division not only helps capture the language's morphological complexity but also makes it easier to handle out-of-vocabulary (OOV) words [42]. When a model encounters an unexpected word, subword tokenization allows it to interpret the word as an ordered sequence of subword units from its lexicon, increasing its capacity to process and evaluate different text data. This makes it easier to create translation systems that are more precise and suitable for the context, which helps to overcome
communication barriers and provides wider access to resources and information across linguistic divides. As research in this area continues, it is anticipated that the accuracy and availability of machine translation for Indian languages will improve, contributing to their preservation, outreach, and broad participation in the digital age. However, it is still unclear which subword tokenization method is best for Indian languages (ILs).

4. Experimental Framework

4.1. Dataset

The training dataset for Indic languages (ILs) is taken from Samanantar, a large publicly accessible dataset for ILs [7]. This dataset contains more than 49.6 million sentence pairs translated between English and 11 ILs. The Flores200 dataset is used for testing [8]. These datasets are used to evaluate the efficacy of three tokenizers, i.e., BPE [9], SentencePiece [21], and WordPiece [25], and to investigate their impact on ILs in SMT (Statistical Machine Translation), NMT (Neural Machine Translation), and MNMT (Multilingual Neural Machine Translation) models.

4.2. Preprocessing

A few punctuation marks in the extended Unicode range have been changed to their standard equivalents. Accented characters were eliminated. Numbers in the Indian-language data were converted from English digits to Indic scripts. The corpus was cleaned by eliminating unprintable characters, extra spaces, characters that did not belong to the standard alphabets of the language pair, and redundant quotation marks. Further preprocessing is performed to handle diacritics such as halants and nuktas and to lowercase all text in the English-language datasets. For example, in Tamil, எங ் களிடம ் இப ் ே்பாது is changed into எங களிடம இப ே்பாது .

4.3.
Different Tokenization and their Impact on Statistical, Neural and Multilingual Models

Byte Pair Encoding (BPE): Byte-pair encoding (BPE) is a compression-based tokenization technique used in machine translation (MT) to represent a large vocabulary with a small number of subword units [9]. Byte-pair encoding ensures that the most common words in the vocabulary appear as single tokens, while the rarest words are divided into subword tokens. BPE plays an important role in statistical machine translation (SMT), neural machine translation (NMT), and multilingual neural machine translation (MNMT) models. Moses is an open-source toolkit for SMT. The first step in constructing the SMT model is to preprocess the data: the text undergoes multiple transformations before the SMT model is built. To start, all language data is lowercased to maintain corpus consistency. Similarly, to prevent encoding problems and simplify the dataset, accented characters are eliminated from the text. For uniformity, some punctuation marks in the extended Unicode range are also changed to their standard counterparts. After that, the corpus is carefully cleaned to remove extraneous spaces, unwanted characters, and nonstandard alphabets unique to the language pair. Then, a few more steps are applied for Indian languages, allowing them to
efficiently deal with diacritics such as halants and nuktas [43]. With this change, the SMT model is ensured to preserve the linguistic subtleties of Indian languages (ILs). After that, Byte Pair Encoding (BPE) tokenization is applied to the data using the subword-nmt toolkit. In SMT, BPE tokenizes the source- and target-language data into subword units, effectively handling unusual or out-of-vocabulary phrases. By splitting words into smaller parts, BPE ensures the model can acquire significant representations for less common words, enhancing translation accuracy. Following tokenization in SMT, the next steps are language model training and translation system training. During language model training, a model is built from the target-language data to ensure fluency in the output translation. In Moses, the training method uses word and segment occurrences to align the target and source languages; after training on the training dataset, the language and translation models are binarized. The final model is filtered using the test dataset before being used to translate the preprocessed test dataset from the source to the target language. Once translation is done, the output is detokenized using the Moses detokenizer, and superfluous quotation marks are eliminated. Similarly, in NMT, BPE tokenization divides words into subword units, allowing the model to handle text more effectively. This method enables the model to learn from a larger set of subword units and to capture detailed linguistic patterns and nuances in both the source and target languages. The Fairseq library [40] is used for building NMT systems.
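A minimal, stdlib-only sketch of the BPE merge-learning loop (in the style of the classic algorithm, not the subword-nmt implementation itself), together with an application step that can skip merges at random in the spirit of the BPE-dropout idea [31]:

```python
from collections import Counter
import random

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merges: repeatedly merge the most frequent adjacent
    symbol pair in the vocabulary (a sketch of the classic algorithm)."""
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for symbols, freq in vocab.items():
            for pair in zip(symbols, symbols[1:]):
                pair_counts[pair] += freq
        if not pair_counts:
            break
        best = max(pair_counts, key=pair_counts.get)
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] = freq
        vocab = new_vocab
    return merges

def apply_bpe(word, merges, dropout=0.0, rng=random):
    """Apply learned merges to one word. With dropout > 0, each candidate
    merge is skipped with that probability (the BPE-dropout idea [31]),
    so the same word can segment differently across training epochs."""
    symbols = list(word)
    for left, right in merges:
        i = 0
        while i < len(symbols) - 1:
            if (symbols[i], symbols[i + 1]) == (left, right) and rng.random() >= dropout:
                symbols[i:i + 2] = [left + right]
            else:
                i += 1
    return symbols

# Toy corpus: frequent substrings such as "es" and "est" become single units.
merges = learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, num_merges=2)
print(merges)
print(apply_bpe("newest", merges))
```

The corpus and merge count above are illustrative; real systems learn tens of thousands of merges from millions of sentences.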
All NMT models are implemented using the Transformer architecture [39]. The model consists of six encoder and six decoder layers, each with 8 attention heads. The dimension of the Transformer feed-forward layer is 2048, and the embedding dimension of both the encoder and the decoder is 512. In MNMT, BPE is critical for handling multilingual datasets. By breaking down text into subword units, BPE allows the model to process many languages at the same time. Because of this flexibility, MNMT models can handle a wide range of language pairs and produce appropriate translations in various linguistic circumstances. The MNMT model is built with the same configuration as the NMT model, using the Transformer model from Fairseq [40].

WordPiece: Like BPE, WordPiece begins with a vocabulary made up of individual characters or bytes, which it then progressively combines into larger subword units according to how frequently those units appear in the training set [25]. It joins consecutive pairs of whole words or subword units rather than consecutive pairs of bytes. Using the base vocabulary as a training set, the WordPiece algorithm selects the pair with the highest likelihood, adds it to the vocabulary, trains the language model with the new vocabulary, and repeats the process
until the needed vocabulary size or likelihood threshold is met. In WordPiece, the merge decision depends on a score computed, for a candidate pair of symbols (x, y), as

score(x, y) = freq(xy) / (freq(x) × freq(y)),

i.e., the frequency of the merged pair relative to the product of the frequencies of the individual symbols x and y. The pair with the highest score is merged into the vocabulary. For WordPiece, the Hugging Face tokenizers library is utilized. WordPiece serves as an important preprocessing step in Statistical Machine Translation (SMT) for translating English to ILs and vice versa: it tokenizes sentences into subword units, ensuring that even uncommon or out-of-vocabulary terms are correctly represented. Similarly, in the Neural Machine Translation (NMT) experiments, WordPiece tokenization comes after preliminary preprocessing, which typically includes steps such as lowercasing and normalization. It is used in NMT to separate text into subword units rather than full words. By representing uncommon or out-of-vocabulary terms with meaningful subword units, this tokenization helps handle such terms and improves the model's ability to analyze and generalize from the input data. Once tokenization has been completed, a dictionary is created that maps every subword unit to a distinct identifier, allowing more efficient data processing. The preprocessed data is then binarized, transforming the textual data into a numerical format appropriate for training the model. The Transformer model [39] is used to train the NMT models; it comprises six encoder and six decoder layers and is optimized with the Adam optimizer [17]. Likewise, the MNMT (Multilingual Neural Machine Translation) model uses WordPiece tokenization, which divides each word in the input text into subword units determined by the vocabulary learned during tokenization.
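The merge criterion just described can be computed directly; the symbol frequencies below are made up for illustration.

```python
def wordpiece_score(pair_freq, x_freq, y_freq):
    """WordPiece merge score (a sketch): frequency of the merged pair
    divided by the product of the individual symbol frequencies.
    Unlike BPE's raw pair count, two individually rare symbols that
    almost always occur together score highly."""
    return pair_freq / (x_freq * y_freq)

# Hypothetical counts: "q" and "u" are rare but nearly always adjacent,
# while "t" and "h" are each very common on their own.
print(wordpiece_score(pair_freq=10, x_freq=10, y_freq=12))    # ~0.0833
print(wordpiece_score(pair_freq=50, x_freq=500, y_freq=800))  # 0.000125
```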
The segmentation process enables the model to handle linguistic variation across multiple languages by representing uncommon or out-of-vocabulary terms with smaller subword units. The model is then trained with the Transformer implementation [39] from Fairseq [40], using the same configuration as the NMT model. SentencePiece: Traditional tokenization algorithms have difficulty with languages that do not use spaces to separate words, as they usually assume a word-space structure in the input text. SentencePiece solves this issue by treating the input as a raw character stream in which spaces are part of the character set [21]. This approach allows SentencePiece to handle languages with diverse word-boundary conventions more efficiently: it performs word segmentation regardless of language-specific tokenization rules by treating spaces as regular characters within the input text. SentencePiece is a flexible subword tokenization toolkit created by Google that uses unsupervised training. It operates on the input sentences directly at the subword level, breaking them into sequences of variable-length subword units; in contrast to BPE and WordPiece, no language-specific pre-tokenization is required. It divides sentences into meaningful subword units by applying a learned model and a predefined vocabulary of subword units. SentencePiece plays an important role in tokenizing text data for preprocessing in Statistical Machine Translation (SMT), Neural Machine Translation (NMT), and Multilingual Neural Machine Translation
(MNMT). For SentencePiece tokenization, the SentencePiece library is utilized. It helps preprocess data in both source and target languages in SMT, ensuring that significant linguistic variations and patterns are captured during tokenization. Similarly, it is used in NMT to tokenize input sentence sequences, allowing the model to accommodate a variety of languages and linguistic conventions. SentencePiece also helps in MNMT by preprocessing the bilingual corpora, concatenating them to facilitate training, and creating appropriate vocabularies for multilingual translation tasks. All of our experiments on tokenizers in Statistical Machine Translation (SMT), Neural Machine Translation (NMT), and Multilingual Neural Machine Translation (MNMT) models use a consistent configuration. This consistency ensures dependable and comparable results when evaluating the effectiveness of the different translation frameworks.

5. Results and Discussion

5.1. Impact on Translation using Different Tokenizations

While analyzing translations produced with different tokenizers, we observe that translation quality is affected by how the text is split into tokens. For example, the English sentence "He said, now we have 4 months old rats who are not diabetic who were diabetic" is segmented differently by WordPiece, SentencePiece, and Byte Pair Encoding (BPE) before being translated into Hindi, and these different splits yield different translations.

5.2. Analysis of Results

Tables 2, 3, and 4 display the results of all evaluation metrics for the various tokenizations. MT evaluation uses a range of measures to assess system performance, since each captures a different component of translation quality. All models in our experiments are evaluated on the Flores200 test sets. The Fairseq library with the Adam optimizer (betas of (0.9, 0.98)) is used for training.
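As a concrete illustration of the tokenizer behavior compared above, the sketch below mimics SentencePiece's treatment of whitespace as an ordinary symbol (the "▁" meta-symbol), using a fragment of the example sentence from Section 5.1. The segmentation shown is hypothetical, not the output of a trained SentencePiece model:

```python
# SentencePiece replaces spaces with the meta-symbol U+2581 so the text
# becomes a raw symbol stream; detokenization is then a lossless inverse.
SPACE = "\u2581"  # the "▁" marker SentencePiece uses

def to_stream(text):
    """Turn text into a raw stream where spaces are ordinary symbols."""
    return text.replace(" ", SPACE)

def detokenize(pieces):
    """Concatenate pieces and turn markers back into spaces."""
    return "".join(pieces).replace(SPACE, " ").strip()

text = "He said, now we have 4 months old rats"
stream = to_stream(text)

# A hypothetical segmentation of the stream into subword pieces:
pieces = [SPACE + "He", SPACE + "said", ",", SPACE + "now", SPACE + "we",
          SPACE + "have", SPACE + "4", SPACE + "month", "s", SPACE + "old",
          SPACE + "rat", "s"]
assert detokenize(pieces) == text  # round trip is exact
```

Because the space marker travels with the pieces, no language-specific word-boundary rules are needed to recover the original sentence, which is what makes the approach workable for languages without explicit word separators.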
Our models are run on a high-performance workstation equipped with an Intel Xeon W-1290 CPU with 10 physical cores and 20 threads (3.20 GHz base frequency, up to 5.20 GHz boost), providing robust multi-threading and 20 MiB of L3 cache. The system includes 62 GB of RAM and an NVIDIA Quadro RTX 5000 GPU with 16 GB of VRAM (driver version 535.154.05). CUDA 11.5 is used for compilation, with runtime compatibility up to CUDA 12.2. Training each MNMT model takes roughly two and a half days, whereas training the SMT and NMT models takes from half a day to two days, depending on data size. MT evaluation requires not only linguistic accuracy but also fluency, coherence, and semantic alignment with the source text. Each evaluation metric has its own strengths and working principle, which motivates us to assess translation quality with several of them: each emphasizes a particular aspect, such as semantic accuracy (COMET, METEOR) or structural accuracy and fluency (RIBES, CHRF). Bilingual Evaluation Understudy (BLEU) is one of the best-known measures for MT evaluation; it evaluates the overlap of n-grams (word sequences) between the machine translation and one or more reference translations [1]. It captures exact matches of words and phrases, making it an indicator of surface-level similarity between generated and reference translations. The popularity of BLEU derives from its
simplicity and computational efficiency. For all experiments, the BLEU score ranges from 0 to 100, with higher scores suggesting translations that more closely resemble the human references. Similarly, the Translation Edit Rate (TER) determines the number of edits needed to convert a machine translation into an exact match of the reference translation [4]; edits include insertions, deletions, substitutions, and shifts. It measures the effort required to make a machine translation human-like, i.e., how much post-editing would be needed. TER scores vary between 0 and 1, with lower scores implying higher translation quality (fewer edits needed to match the reference) and higher scores indicating lower quality. Meanwhile, the Metric for Evaluation of Translation with Explicit ORdering (METEOR) improves upon BLEU by incorporating synonyms, partial matches, stemming, and word order [3]. It assigns varying weights to different types of matches and thus captures semantic similarity beyond direct word matches, giving a more flexible measure of meaning overlap. With its support for synonyms and stemming, METEOR aligns more closely with human judgments of translation adequacy and fluency. CHRF, in contrast, analyzes character n-gram matches instead of word n-grams, concentrating on subword similarity [5]. It computes an F-score balancing precision and recall over these character-level n-grams. It is sensitive to slight spelling or morphological variations and works best with morphologically rich, highly inflected languages; however, its emphasis on character similarity may penalize translations that rephrase at the sentence level or use synonymous expressions, ignoring broader semantic accuracy. The Rank-based Intuitive Bilingual Evaluation Score (RIBES) is intended to assess translation quality with a focus on word order [2].
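The CHRF computation just described can be sketched in a few lines. This is a minimal F-beta score over character n-grams in the spirit of chrF, not the official implementation: the real metric averages over n = 1..6 and uses beta = 2, while this sketch fixes a single n for clarity:

```python
from collections import Counter

def char_fscore(hypothesis, reference, n=3, beta=2.0):
    """Character n-gram F-beta score in the spirit of chrF (single n for clarity)."""
    def ngrams(s):
        s = s.replace(" ", "")  # chrF-style: ignore whitespace
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))

    hyp, ref = ngrams(hypothesis), ngrams(reference)
    overlap = sum((hyp & ref).values())  # clipped n-gram matches
    if not hyp or not ref or not overlap:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    # F-beta: beta > 1 weights recall more heavily, as chrF does with beta = 2.
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
```

A near-match such as `char_fscore("the cats sat", "the cat sat")` scores high but below 1.0, which is why the metric tolerates small morphological variations better than word-level BLEU.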
It assesses the correlation of word positions between the reference and translated sentences. By highlighting word order and syntactic coherence, RIBES rewards structural accuracy, which is critical in translations where word order strongly affects meaning. Similarly, the Cross-lingual Optimized Metric for Evaluation of Translation (COMET) is a neural evaluation metric that uses embeddings from pre-trained language models to determine translation quality [6]. It employs a regression model trained to estimate human scores from the source and target translations, capturing deep semantic alignment between source and target texts and recognizing semantic similarities and errors across languages. To compute the BLEU, TER, and CHRF scores, SacreBLEU [45] is utilized, whereas the HuggingFace library [46] is used to evaluate the rest of the metrics. Table 2 shows that with Byte Pair Encoding, the MNMT models outperformed the SMT and NMT models. Using BPE tokenization, MNMT's BLEU score falls between 5.14 and 32.19, whereas the TER score falls between 56.84 and 102.90. The CHRF score ranges from 35.81 to 62.35 and the METEOR score is between 0.24 and 0.63. COMET scores lie between 0.73 and 0.87 and the RIBES score is between 0.34 and 0.79, as shown in Table 2. In SMT, BLEU scores range from 1.3 to 10.19, and TER scores from 81.10 to 118.10. The range of
the CHRF score is 26.07 to 46.23. COMET scores fall between 0.49 and 0.79, METEOR scores span from 0.06 to 0.35, and RIBES scores range from 0.13 to 0.58. For NMT, the ranges are as follows: BLEU from 0.26 to 32.59, TER from 57.82 to 178.79, METEOR from 0.07 to 0.61, CHRF from 9.55 to 61.89, COMET from 0.51 to 0.86, and RIBES from 0.14 to 0.78. Comparing the results, we observe that among the MNMT models the EN-AS model has the lowest BLEU score and the EN-HI model achieves the highest. The reason could be the smaller corpus for AS, which results in lower translation quality than for the other languages, whereas the combination of qualitative and quantitative data for HI leads to better performance. The MNMT model can handle translation tasks for language pairs despite limited parallel data: by training on multiple languages jointly, it exchanges knowledge and enhances translation quality for all involved languages. It learns representations shared across languages, allowing it to exploit cross-lingual similarities to improve translation quality, particularly for low-resource languages.
Table 2: Evaluation Metrics using BPE as Tokenization (columns: Language Pair, BLEU, TER, METEOR, CHRF, RIBES, COMET)

SMT
EN-AS 1.6 95.16 0.14 26.07 0.49 0.6
AS-EN 3.35 87.63 0.24 30.06 0.46 0.68
EN-ML 1.3 99.12 0.08 28.7 0.09 0.67
ML-EN 4.29 85.87 0.25 32.06 0.42 0.71
EN-BN 3.06 88.25 0.16 28.58 0.51 0.76
BN-EN 10.19 81.74 0.37 42.25 0.55 0.73
EN-MR 2.1 92.82 0.13 26.31 0.37 0.7
MR-EN 6.7 84.03 0.31 37.15 0.511 0.56
EN-GU 3.68 88.87 0.19 27.86 0.48 0.76
GU-EN 8.4 83.83 0.35 40.66 0.53 0.76
EN-KN 2.13 94.43 0.13 30.23 0.27 0.7
KN-EN 6.53 84.63 0.3 37 0.48 0.7
EN-HI 6.9 82.38 0.25 29.27 0.54 0.49
HI-EN 1.4 96.88 0.06 8.35 0.19 0.63
EN-OR 3.67 94.16 0.22 36.24 0.55 0.72
OR-EN 7.15 84.66 0.36 39.56 0.52 0.74
EN-PA 5.21 83.9 0.241 28.83 0.58 0.74
PA-EN 8.68 81.1 0.35 40.09 0.54 0.71
EN-TE 2.7 94.4 0.14 30.02 0.3 0.72
TE-EN 8.09 84.06 0.34 39.66 0.5 0.71
EN-TA 2.99 118.1 0.16 46.23 0.13 0.73
TA-EN 9.09 87.22 0.38 42.67 0.5 0.79

NMT
EN-AS 0.26 135.15 0.07 9.55 0.14 0.54
AS-EN 0.77 178.79 0.17 20.5 0.18 0.51
EN-ML 8.11 106.55 0.29 52.91 0.44 0.83
ML-EN 22.13 71.31 0.55 53.62 0.71 0.83
EN-BN 3.06 88.25 0.16 28.58 0.51 0.76
BN-EN 28.22 62.6 0.61 58.03 0.76 0.84
EN-MR 9.51 97.43 0.34 44.71 0.6 0.81
MR-EN 19.37 73.42 0.51 50.21 0.7 0.69
EN-GU 16.29 81.43 0.42 49.41 0.67 0.84
GU-EN 23.75 70.05 0.57 55.3 0.73 0.85
EN-KN 11.86 89.79 0.34 52.15 0.58 0.82
KN-EN 20.84 74.33 0.53 52.92 0.71 0.82
EN-HI 31.41 57.82 0.56 56.6 0.78 0.86
HI-EN 32.59 57.66 0.65 61.89 0.79 0.79
EN-OR 5.09 99.14 0.24 36.58 0.58 0.75
OR-EN 10.92 84.95 0.38 39.27 0.58 0.75
EN-PA 19.16 73.58 0.48 48.53 0.74 0.84
PA-EN 27.39 61.92 0.59 56.31 0.77 0.81
EN-TE
13.73 91.8 0.39 54.41 0.61 0.83
TE-EN 24.52 68.95 0.56 55.36 0.73 0.82
TA-EN 7.03 107.93 0.24 52.64 0.31 0.81
EN-TA 20.99 74.36 0.53 52.32 0.71 0.84

MNMT
EN-AS 5.14 93.13 0.24 35.81 0.61 0.8
AS-EN 20.51 72.4 0.51 50.52 0.71 0.78
EN-ML 9.71 94.77 0.3 54.97 0.48 0.85
ML-EN 27.76 64.39 0.6 58.55 0.75 0.85
EN-BN 16.11 73.34 0.42 53.03 0.72 0.86
BN-EN 29 61.83 0.62 58.7 0.77 0.85
EN-MR 11.01 88.57 0.36 47.66 0.64 0.84
MR-EN 27 62.63 0.59 57.2 0.76 0.72
EN-GU 18.23 75.07 0.45 51.82 0.72 0.87
GU-EN 30.93 59.35 0.63 60.67 0.78 0.86
EN-KN 12.11 86.11 0.35 53.17 0.6 0.84
KN-EN 25.5 67.65 0.58 56.08 0.74 0.83
EN-HI 29.89 59.43 0.55 56.17 0.78 0.87
HI-EN 33.31 56.85 0.65 62.35 0.79 0.79
EN-OR 11.21 79.5 0.353 48.38 0.69 0.85
OR-EN 26.91 63.78 0.6 57.36 0.75 0.81
EN-PA 20.82 68.43 0.5 50.43 0.76 0.86
PA-EN 32.19 56.84 0.64 60.3 0.799 0.82
EN-TA 7.75 102.9 0.25 54.1 0.34 0.83
TA-EN 24.44 66.86 0.57 54.91 0.74 0.84
EN-TE 14.58 85.44 0.4 55.32 0.64 0.85
TE-EN 29.32 63.04 0.62 59.83 0.76 0.83

Table 3 displays evaluation metrics for the models that use SentencePiece for tokenization. The BLEU score for the NMT model with SentencePiece tokenization ranges from 0.48 to 32.54. The METEOR score varies between 0.06 and 0.64, while the TER score falls between 63.48 and 146.82. The ranges of the CHRF and RIBES scores are 10.38 to 62.05 and 0.10 to 0.77, respectively. COMET scores likewise fall between 0.54 and 0.85. When applying SentencePiece tokenization to the SMT model, TER spans from 84.80 to 101.68 and the BLEU score falls between 1.37 and 12.39. The METEOR score ranges from 0.16 to 0.48, while the CHRF score is 28.42 to 49.16. The COMET score ranges from 0.49 to 0.79, while the RIBES score is 0.08 to 0.61. In the same way, the ranges of the BLEU and TER scores for the MNMT models are 4.6 to 32.97 and 57.89 to 98.41, respectively. The CHRF range is 35.76 to 62.35, and the METEOR score ranges from 0.23 to 0.65.
The COMET score ranges from 0.71 to 0.87, while the RIBES score is 0.34 to 0.79.

Table 3: Evaluation Metrics using SentencePiece as Tokenization (columns: Language Pair, BLEU, TER, METEOR, CHRF, RIBES, COMET)

SMT
EN-AS 2.21 96.57 0.16 28.42 0.49 0.61
AS-EN 3.77 87.8 0.27 32.34 0.47 0.7
EN-ML 1.37 100.05 0.069 30.79 0.08 0.51
ML-EN 2.02 91.45 0.15 25.59 0.34 0.68
EN-BN 6.86 89.15 0.28 43.92 0.6 0.79
BN-EN 13 84.8 0.48 49.16 0.59 0.8
EN-MR 4.61 98.37 0.24 38.82 0.47 0.75
MR-EN 9.75 85.35 0.42 45.57 0.55 0.63
EN-GU 4.69 94.96 0.24 34.13 0.43 0.79
GU-EN 11.63 83.63 0.46 48.48 0.58 0.77
EN-KN 4.58 101.68 0.22 42.92 0.38 0.75
KN-EN 10.05 88.33 0.42 46.15 0.51 0.76
EN-HI 4.54 86.03 0.19 27.25 0.48 0.62
HI-EN 4.93 87.95 0.22 34.61 0.46 0.57
EN-OR 3.07 96.77 0.21 37.01 0.54 0.73
OR-EN 7.21 86.45 0.35 41.18 0.51 0.74
EN-PA 8 86.1 0.342 39.54 0.61 0.77
PA-EN 12.39 80.24 0.46 47.65 0.58 0.75
EN-TE 6 103.94 0.25 43.81 0.38 0.76
TE-EN 11.39 87.4 0.46 49.06 0.53 0.76
EN-TA 1.31 103.7 0.09 32.42 0.06 0.58
TA-EN 2.6 90.12 0.18 28.11 0.38 0.72

NMT
EN-AS 0.48 146.82 0.06 10.38 0.1 0.54
AS-EN 0.84 152.21 0.16 20.49 0.16 0.45
EN-ML 8.7 99.91 0.28 53.43 0.45 0.84
ML-EN 23.12 72.97 0.57 55.43 0.72 0.83
EN-BN 17.43 74.33 0.43 53.47 0.72 0.86
BN-EN 28.61 63.48 0.6 58.62 0.76 0.85
EN-MR 9.88 95.03 0.33 44.23 0.59 0.82
MR-EN 20.38 72.93 0.53 51.82 0.71 0.69
EN-GU 16.54 82.29 0.42 49.97 0.68 0.85
GU-EN 24.79 68.38 0.58 56.4 0.74 0.85
EN-KN 12.18 92.07 0.35 53.1 0.57 0.83
KN-EN 22.42 71.9 0.55 54.22 0.72 0.82
EN-HI 31.85 60.47 0.56 57.03 0.77 0.84
HI-EN 32.54 63.54 0.64 62.05 0.76 0.78
EN-OR 5.4 99.26 0.26 39.11 0.6 0.75
OR-EN 11.11 90.18 0.4 41.06 0.6 0.76
EN-PA 20.3 71.15 0.48 49.32 0.75 0.85
PA-EN 27.4 62.02 0.6 56.87 0.77 0.81
EN-TE 14.22 92.89 0.39 54.61 0.61 0.84
TE-EN 26.62 67.54 0.59 57.89 0.75 0.82
EN-TA 8.42 105.85 0.25 53.87 0.33 0.82
TA-EN 22.69 71.31 0.55 53.75 0.72 0.84

MNMT
EN-AS 4.6 98.41 0.23 35.76 0.59 0.8
AS-EN 19.9 75.19 0.51 50.4 0.7 0.78
EN-ML 8.44 96.78 0.29 53.95 0.48 0.85
ML-EN 26.68 66.55 0.59 57.67 0.74 0.84
EN-BN 15.22 76.59 0.41 52.18 0.71 0.86
BN-EN 27.93 63.66 0.6 58.02 0.76 0.85
EN-MR 9.92 92.17 0.35 46.84 0.63 0.84
MR-EN 25.89 65.34 0.59 56.83 0.75 0.71
EN-GU 17.68 77.42 0.45 50.96 0.71 0.87
GU-EN 29.85 61.04 0.63 59.99 0.77 0.86
EN-KN 11.77 87.91 0.35 52.21 0.59 0.84
KN-EN 24.83 68.37 0.57 55.8 0.73 0.83
EN-HI 28.4 61.55 0.54 55.2 0.77 0.86
HI-EN 32.97 57.79 0.65 62.35 0.79 0.79
EN-OR 9.99 83.12 0.35 47.13 0.68 0.85
OR-EN 25.74 67.05 0.59 56.86 0.74 0.8
EN-PA 19.98 69.94 0.49 49.43 0.76 0.86
PA-EN 31.2 57.89 0.63 59.85 0.79 0.82
EN-TE 13.25 87.9 0.39 54.15 0.62 0.84
TE-EN 28.66 64.83 0.61 59.43 0.75 0.83
EN-TA 7.6 102.04 0.25 54.41 0.34 0.83
TA-EN 24.59 67.87 0.56 54.79 0.73 0.85

Table 4 presents the outcomes of WordPiece tokenization using SMT, NMT, and MNMT. The performance metrics in SMT show a range of values.
The range of the BLEU scores is 2.06 to 14.95, RIBES scores range from 0.13 to 0.61, CHRF scores range from 27.13 to 51.82, and COMET scores fluctuate from 0.62 to 0.80. METEOR scores vary from 0.15 to 0.50, while TER scores range from 78.42 to 109.52. Similarly, the BLEU scores for NMT display a range of 0.48 to 31.95, RIBES scores 0.12 to 0.78, CHRF scores 12.22 to 61.45, COMET scores 0.52 to 0.86, TER scores 58.79 to 150.74, and METEOR scores 0.06 to 0.62. Likewise, the BLEU scores in MNMT range from 1.50 to 32.43, RIBES scores from 0.28 to 0.78, CHRF scores from 15.00 to 59.90, and COMET scores from 0.57 to 0.87 (ranges read from Table 4). METEOR scores range
from 0.08 to 0.63, whereas TER scores vary from 57.92 to 103.80 (see Table 4). After analyzing the NMT and SMT models, we find that, for the majority of language pairs, the SentencePiece tokenizer consistently produced better BLEU scores than the other tokenizers. However, BPE tokenization performed better than the other tokenization techniques in the context of MNMT.

Table 4: Evaluation Metrics using WordPiece as Tokenization (columns: Language Pair, BLEU, TER, METEOR, CHRF, RIBES, COMET)

SMT
EN-AS 2.06 92.84 0.15 27.13 0.51 0.62
AS-EN 3.83 88 0.26 32.07 0.47 0.69
EN-ML 3.47 109.52 0.16 41.57 0.2 0.74
ML-EN 7.95 85.27 0.37 43.81 0.46 0.77
EN-BN 3.03 106.3 0.2 35.08 0.52 0.71
BN-EN 6.64 84.77 0.32 38.86 0.51 0.73
EN-MR 4.59 98.35 0.23 39.08 0.47 0.75
MR-EN 9.75 85.67 0.41 45.63 0.54 0.63
EN-GU 7.5 91.32 0.31 40.94 0.55 0.79
GU-EN 11.45 84.82 0.46 48.35 0.58 0.81
EN-KN 4.72 98.81 0.22 42.94 0.39 0.75
KN-EN 9.72 89.08 0.42 46.02 0.51 0.76
EN-HI 12.57 79.71 0.39 44.2 0.61 0.8
HI-EN 14.95 78.42 0.5 51.82 0.61 0.7
EN-OR 3.65 93.93 0.22 36.91 0.55 0.73
OR-EN 7.39 85.33 0.37 41.18 0.52 0.74
EN-PA 6.84 82.1 0.31 39.54 0.61 0.77
PA-EN 10.39 80.14 0.32 47.65 0.58 0.75
EN-TE 5.74 101.42 0.25 44.19 0.4 0.762
TE-EN 5.74 101.42 0.25 44.19 0.4 0.76
EN-TA 3.59 114.8 0.16 46.68 0.13 0.73
TA-EN 9.21 89.17 0.39 43.74 0.5 0.79

NMT
EN-AS 0.48 133.12 0.063 12.22 0.12 0.52
AS-EN 0.75 150.74 0.15 19.69 0.15 0.54
EN-ML 8.76 101.77 0.28 53.8 0.44 0.83
ML-EN 22.74 72.06 0.55 54.71 0.71 0.83
EN-BN 6.62 86.61 0.24 43.08 0.63 0.54
BN-EN 0.42 128.66 0.14 20.23 0.15 0.8
EN-MR 9.37 94.91 0.32 44.73 0.58 0.81
MR-EN 20.32 72.96 0.52 51.9 0.71 0.69
EN-GU 16.56 80.68 0.42 50.05 0.685 0.85
GU-EN 24.33 68.23 0.57 56.27 0.74 0.85
EN-KN 12.96 85.69 0.34 53.44 0.59 0.83
KN-EN 21.63 73.46 0.54 54.11 0.71 0.83
EN-HI 31.19 58.79 0.55 56.75 0.78 0.86
HI-EN 31.95 59.57 0.62 61.45 0.78 0.79
EN-OR 4.34 105.44 0.24 36.98 0.59 0.73
OR-EN 9.1 93 0.36 38.235 0.55 0.74
EN-PA 19.43 71.04 0.47 49.34
0.74 0.84
PA-EN 27.15 63.18 0.59 56.96 0.77 0.81
EN-TE 13.77 89.12 0.38 54.28 0.61 0.83
TE-EN 25.28 68.98 0.57 56.91 0.72 0.82

MNMT
EN-AS 1.5 103.8 0.08 15 0.28 0.78
AS-EN 19 74.39 0.47 47.98 0.69 0.57
EN-ML 9.16 94.64 0.27 51.97 0.46 0.85
ML-EN 26.02 66.18 0.58 57.4 0.74 0.83
EN-BN 15.35 74.63 0.4 50.79 0.72 0.85
BN-EN 26.86 65.18 0.58 57.73 0.75 0.84
EN-MR 10.96 87.03 0.35 46.29 0.64 0.81
MR-EN 26.23 64.23 0.58 57.02 0.75 0.71
EN-GU 17.55 76.16 0.43 49.55 0.7 0.87
GU-EN 30.28 59.82 0.62 60.53 0.78 0.86
EN-KN 11.28 86.28 0.33 50.99 0.58 0.84
KN-EN 24.15 68.11 0.56 55.77 0.74 0.82
EN-HI 27.25 61.7 0.52 53.23 0.77 0.86
HI-EN 32.43 58.04 0.63 62.07 0.79 0.78
EN-OR 9.5 81.2 0.32 45.74 0.68 0.81
OR-EN 25.05 66.45 0.573 55.84 0.73 0.8
EN-PA 19.48 68.7 0.47 48.3 0.75 0.86
PA-EN 30.78 57.92 0.62 59.79 0.79 0.81
EN-TE 14.22 84.03 0.37 52.61 0.62 0.84
TE-EN
28.8 63.73 0.6 59.9 0.76 0.82
EN-TA 7.56 101.61 0.24 52.09 0.3 0.83
TA-EN 23.12 68.15 0.55 53.97 0.73 0.841

6. Conclusion

This paper investigates the effects of various subword tokenization techniques, namely Byte Pair Encoding (BPE), SentencePiece, and WordPiece, on Indian languages. Across experiments on 11 ILs, SentencePiece tokenization outperforms the other tokenizers in both Statistical Machine Translation (SMT) and Neural Machine Translation (NMT), whereas BPE tokenization performs best for Multilingual Neural Machine Translation (MNMT). This difference in efficiency can be attributed to the nature of the tasks as well as the linguistic characteristics of the languages used. SentencePiece is particularly well suited to SMT and NMT due to its ability to capture morphological nuances and handle out-of-vocabulary words, while in the context of MNMT, BPE's segmentation strategy represents subword units effectively, proving advantageous. The results also show that, regardless of the tokenizer used on the Samanantar dataset for each model (SMT, NMT, and MNMT), translations from ILs to English outperform translations from English to ILs. In future work, these tokenizers can be applied to Indic-to-Indic translation and the translation quality evaluated further.

References

[1] Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318, 2002. [2] Tan, L., Dehdari, J., & van Genabith, J. An awkward disparity between BLEU/RIBES scores and human judgements in machine translation. In Proceedings of the 2nd Workshop on Asian Translation (WAT2015), 74–81, 2015. [3] Banerjee, S., & Lavie, A. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments.
In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, 65–72, 2005. [4] Snover, M., Dorr, B., Schwartz, R., Micciulla, L., & Makhoul, J. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers, 223–231, 2006. [5] Popović, M. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, 392–395, 2015. [6] Rei, R., Stewart, C., Farinha, A. C., & Lavie, A. COMET: A neural framework for MT evaluation. arXiv preprint arXiv:2009.09025, 2020. [7] Ramesh, G. et al. Samanantar: The largest publicly available parallel corpora collection for 11 Indic languages. Transactions of the Association for Computational Linguistics, 10, 145–162, 2022. [8] Costa-jussà, M. R. et al. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672, 2022. [9] Sennrich, R., Haddow, B., & Birch, A. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015. [10] Bala Das, S., Biradar, A., Mishra, T. K., & Patra, B. K. Improving multilingual neural machine translation system for Indic languages. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(6), 1–24, 2023. [11] Das,
S. B., Panda, D., Mishra, T. K., Patra, B. K., & Ekbal, A. Multilingual neural machine translation for Indic to Indic languages. ACM Transactions on Asian and Low-Resource Language Information Processing, 2024. [12] Das, S. B., Biradar, A., Mishra, T. K., & Patra, B. K. NIT Rourkela machine translation (MT) system submission to WAT 2022 for MultiIndicMT: An Indic language multilingual shared task. 2022. [13] Wu, Y. et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016. [14] Brown, P. F. et al. A statistical approach to machine translation. Computational Linguistics, 16(2), 79–85, 1990. [15] Forcada, M. L. et al. Apertium: a free/open-source platform for rule-based machine translation. Machine Translation, 25, 127–144, 2011. [16] Lopez, A. Statistical machine translation. ACM Computing Surveys (CSUR), 40(3), 1–49, 2008. [17] Bahdanau, D., Cho, K., & Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. [18] Johnson, M. et al. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5, 339–351, 2017. [19] Aharoni, R., Johnson, M., & Firat, O. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089, 2019. [20] Domingo, M. et al. How much does tokenization affect neural machine translation? In International Conference on Computational Linguistics and Intelligent Text Processing, 545–554, Springer, 2019. [21] Kudo, T., & Richardson, J. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018. [22] Khan, N. J., Anwar, W., & Durrani, N. Machine translation approaches and survey for Indian languages. arXiv preprint arXiv:1701.04290, 2017. [23] Chai, C. P.
Comparison of text preprocessing methods. Natural Language Engineering, 29(3), 509–553, 2023. [24] Vikram, S. Morphology: Indian languages and European languages. International Journal of Scientific and Research Publications, 3(6), 1–5, 2013. [25] Song, X., Salcianu, A., Song, Y., Dopson, D., & Zhou, D. Fast WordPiece tokenization. arXiv preprint arXiv:2012.15524, 2020. [26] Bird, S., Klein, E., & Loper, E. NLTK documentation. Online: accessed April, 2008. [27] Choi, J. D., Tetreault, J., & Stent, A. It depends: Dependency parser comparison using a web-based evaluation tool. In Proceedings of ACL-IJCNLP, 387–396, 2015. [28] Mielke, S. J. et al. Between words and characters: A brief history of open-vocabulary modeling and tokenization in NLP. arXiv preprint arXiv:2112.10508, 2021. [29] Ling, W., Trancoso, I., Dyer, C., & Black, A. W. Character-based neural machine translation. arXiv preprint arXiv:1511.04586, 2015. [30] Park, K., Lee, J., Jang, S., & Jung, D. An empirical study of tokenization strategies for various Korean NLP tasks. arXiv preprint arXiv:2010.02534, 2020. [31] Provilkov, I., Emelianenko, D., & Voita, E. BPE-dropout: Simple and effective subword regularization. arXiv preprint arXiv:1910.13267, 2019. [32] Bostrom, K., & Durrett, G. Byte pair encoding is suboptimal for language model pretraining. arXiv preprint arXiv:2004.03720, 2020. [33] Nayak,
A., & Timmapathini, H. P. Using integrated gradients and constituency parse trees to explain linguistic acceptability learnt by BERT. arXiv preprint arXiv:2106.07349, 2021. [34] Gong, X., Xu, H., & Huang, L. HAN: hierarchical association network for computing semantic relatedness. In Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), 2018. [35] Banerjee, T., & Bhattacharyya, P. Meaningless yet meaningful: Morphology grounded subword-level NMT. In Proceedings of the Second Workshop on Subword/Character Level Models, 55–60, 2018. [36] Gaser, M. A. S. A., & Vu, T. Subword-level segmentation for neural machine translation of code-switched dialectal Egyptian Arabic-English text. [37] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. [38] Trajtenberg, M. AI as the next GPT: a political-economy perspective. National Bureau of Economic Research, 2018. [39] Vaswani, A. et al. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017. [40] Ott, M. et al. fairseq: A fast, extensible toolkit for sequence modeling. arXiv preprint arXiv:1904.01038, 2019. [41] Nayak, A., Timmapathini, H., Ponnalagu, K., & Venkoparao, V. G. Domain adaptation challenges of BERT in tokenization and sub-word representations of out-of-vocabulary words. In Proceedings of the First Workshop on Insights from Negative Results in NLP, 1–5, 2020. [42] Creutz, M. et al. Morph-based speech recognition and modeling of out-of-vocabulary words across languages. ACM Transactions on Speech and Language Processing (TSLP), 5(1), 1–29, 2007. [43] Koehn, P., & Hoang, H. Moses: Statistical Machine Translation System. User Manual and Code Guide, 2010. [44] Kingma, D. P., & Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [45] M.
Post, A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, Brussels, Belgium: Association for Computational Linguistics, Oct. 2018, pp. 186–191. [Online]. Available: https://www.aclweb.org/anthology/W18-6319. [46] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. Rush. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Q. Liu and D. Schlangen, Eds. Association for Computational Linguistics, 2020, pp. 38–45. [47] Das, S. B., Choudhury, S., Mishra, T. K., & Patra, B. K. (2025, January). Investigating the effect of backtranslation for Indic languages. In Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages, pp. 152–165.
MPO: Multilingual Safety Alignment via Reward Gap Optimization

Weixiang Zhao1, Yulin Hu1, Yang Deng2, Tongtong Wu3, Wenxuan Zhang4, Jiahe Guo1, An Zhang5, Yanyan Zhao1*, Bing Qin1, Tat-Seng Chua5, Ting Liu1
1Harbin Institute of Technology, 2Singapore Management University, 3Monash University, 4Singapore University of Technology and Design, 5National University of Singapore
{wxzhao, yyzhao}@ir.hit.edu.cn

Abstract

Large language models (LLMs) have become increasingly central to AI applications worldwide, necessitating robust multilingual safety alignment to ensure secure deployment across diverse linguistic contexts. Existing preference learning methods for safety alignment, such as RLHF and DPO, are primarily monolingual and struggle with noisy multilingual data. To address these limitations, we introduce Multilingual reward gaP Optimization (MPO), a novel approach that leverages the well-aligned safety capabilities of the dominant language (e.g., English) to improve safety alignment across multiple languages. MPO directly minimizes the reward gap difference between the dominant language and target languages, effectively transferring safety capabilities while preserving the original strengths of the dominant language. Extensive experiments on three LLMs, LLaMA-3.1, Gemma-2 and Qwen2.5, validate MPO's efficacy in multilingual safety alignment without degrading general multilingual utility. Our code is available at: https://github.com/circle-hit/MPO. WARNING: This paper may contain content that is offensive and harmful.

1 Introduction

Large language models (LLMs) are increasingly driving global applications (Brown et al., 2020; Touvron et al., 2023a,b; Jiang et al., 2023; Dubey et al., 2024; Team et al., 2024), enabling users from diverse linguistic and cultural backgrounds to access the benefits of AI advancements (Zhao et al., 2024b; Zheng et al., 2024a).
In this context, achieving multilingual safety alignment is crucial to ensuring secure deployment across various languages (Kanepajs et al., 2024; Friedrich et al., 2024). However, recent studies highlight substantial differences in the safety challenges faced by LLMs across various languages, with models being more prone to generate unsafe responses in low-resource languages (Yong et al., 2023; Deng et al., 2024; Wang et al., 2024c; Shen et al., 2024).

*Corresponding author

To mitigate this challenge, one straightforward solution is to conduct safety preference alignment for each language, with methods like reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022) or direct preference optimization (DPO) (Rafailov et al., 2023). However, a key issue is the scarcity of available multilingual data (Ahmadian et al., 2024; Wu et al., 2024c; Hong et al., 2024a). Though off-the-shelf translation tools could be employed to generate training data in various languages, the resulting translations, especially for low-resource languages, are often noisy, riddled with unusual phrasing and inaccurate content (Zhang et al., 2024b; Liu et al., 2024a). On the other hand, current prevailing preference learning paradigms are highly sensitive to noisy data (Bai et al., 2022; Wang et al., 2024a; Chowdhury et al., 2024; Alfano et al., 2024). In some cases, such noise-induced errors may even cause safety misalignment (Shen et al., 2024; Razin et al., 2024), further exacerbating multilingual safety concerns. To address this challenge, we first conduct an empirical analysis on several widely-used LLMs, including LLaMA-3.1 (Dubey et al., 2024), Gemma-2 (Team et al., 2024),
and Qwen2.5 (Yang et al., 2024a), which have undergone sufficient safety alignment for their dominant language (typically English). We identify a crucial pattern: the implicit reward gap—defined as the log-likelihood difference between safe and unsafe responses—strongly correlates with multilingual safety performance. The dominant language (English) exhibits a substantially larger reward gap (RG) compared to low-resource ones, directly corresponding to its superior safety performance measured by Attack Success Rate (ASR). This inverse RG-ASR relationship establishes the reward gap as a quantifiable indicator of safety alignment quality across languages.

arXiv:2505.16869v1 [cs.CL] 22 May 2025

[Figure 1: Top: Current preference learning methods (DPO, KTO, SimPO, etc.) optimize noisy multilingual preference data, modeling the log-likelihoods of safe and unsafe responses per language (e.g., English, Arabic, Swahili). Bottom: Our MPO directly minimizes the discrepancy of the reward gap across different languages.]

Building on these insights, we propose Multilingual reward gaP Optimization (MPO), a novel alignment paradigm for the multilingual safety challenge that transfers safety capabilities from well-aligned dominant languages to others through reward gap optimization. As shown in Figure 1, unlike conventional preference learning approaches that attempt to directly optimize noisy multilingual preference data, MPO instead minimizes the discrepancy between the dominant language's robust and well-established reward gap and target languages' weaker alignment signals. To preserve the capabilities of the dominant language from degradation, we also incorporate constraints that maintain its hidden representations largely intact.
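To make the reward-gap notion concrete, here is a minimal Python sketch of how a length-normalized implicit reward gap could be computed from per-token log-probabilities. The function names and toy log-probability values are illustrative assumptions, not the paper's implementation.

```python
def sequence_logprob(token_logprobs):
    """Sum of per-token log-probabilities for one response."""
    return sum(token_logprobs)

def reward_gap(safe_token_logprobs, unsafe_token_logprobs):
    """Length-normalized log-likelihood gap between a safe and an unsafe
    response: avg-logprob(safe) - avg-logprob(unsafe). A larger positive
    gap means the model prefers the safe answer."""
    safe = sequence_logprob(safe_token_logprobs) / len(safe_token_logprobs)
    unsafe = sequence_logprob(unsafe_token_logprobs) / len(unsafe_token_logprobs)
    return safe - unsafe

# Toy per-token log-probs: the model assigns higher likelihood to the safe reply.
safe_lp = [-0.2, -0.4, -0.3]            # avg -0.3
unsafe_lp = [-1.5, -2.0, -1.8, -1.7]    # avg -1.75
print(round(reward_gap(safe_lp, unsafe_lp), 3))  # → 1.45
```

A positive gap of this kind is exactly the pattern the analysis below (Table 1) associates with lower attack success rates.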
Our extensive experiments on LLaMA-3.1-8B-Instruct, Gemma-2-9B-it and Qwen2.5-7B-Instruct showcase the efficacy and scalability of MPO in multilingual safety alignment over current preference learning methods without compromising general multilingual utility. Deeper analysis reveals that MPO consistently outperforms across training datasets of varying quality. This further confirms that the reward gap of the dominant language serves as a more reliable and scalable supervision signal for effective multilingual safety alignment.

The main contributions of this work are summarized as follows:

• We propose to leverage the well-aligned safety capabilities of the dominant language as a high-quality supervision signal for multilingual safety alignment.
• We propose MPO, which directly minimizes the reward gap difference between the dominant language and target languages, enabling effective multilingual safety alignment.
• Experiments on three backbones demonstrate the superior performance of MPO over existing preference learning methods.

2 Preliminaries

In this section, we first introduce the formulation for the implicit reward gap re-parameterized by DPO (§2.1), as well as its improvements and optimizations in SimPO (§2.2). Specifically, we offer their corresponding interpretations in the context of multilingual safety alignment.

2.1 Direct Preference Optimization (DPO)

DPO (Rafailov et al., 2023) is one of the most widely used methods for preference learning in LLM alignment. Unlike approaches that involve training an explicit reward model (Ouyang et al., 2022), DPO re-parameterizes the implicit reward function r using a closed-form expression derived from the Bradley-Terry (BT) model (Bradley and Terry, 1952) with the optimal policy:

r(x, y) = β log ( π_θ(y|x) / π_ref(y|x) ) + β log Z(x)    (1)

where π_θ is
the policy model, π_ref is the reference model, typically the supervised fine-tuned (SFT) checkpoint, β is a hyper-parameter, and Z(x) is the partition function. In the context of multilingual safety alignment, the reward gap of the backbone model between safe and unsafe responses in a language t can be expressed as:

RG^t = r(x^t, y_w^t) − r(x^t, y_l^t)
     = β log ( π_θ(y_w^t | x^t) / π_ref(y_w^t | x^t) ) − β log ( π_θ(y_l^t | x^t) / π_ref(y_l^t | x^t) )    (2)

where the triplet (x^t, y_w^t, y_l^t) denotes a preference pair related to safety concerns in language t, consisting of the input query x^t, the preferred (safe) response y_w^t, and the dispreferred (unsafe) response y_l^t.

2.2 Simple Preference Optimization (SimPO)

As pointed out by SimPO (Meng et al., 2024), using Eq. (1) as the implicit reward has the following drawback: it creates a mismatch between the reward optimized in training and the log-likelihood optimized during inference. To address this issue, SimPO considers using the average log-likelihood as the implicit reward:

r(x, y) = p_θ(y|x) = (1 / |y|) log π_θ(y|x)    (3)

Accordingly, the reward gap is formulated as:

RG^t = (1 / |y_w^t|) log π_θ(y_w^t | x^t) − (1 / |y_l^t|) log π_θ(y_l^t | x^t)    (4)

Table 1: Results of reward gap (RG) and safety performance across six languages. The evaluation metric used for safety is the Attack Success Rate (ASR), where lower values indicate better performance. Results of the dominant languages are highlighted in bold.

                  En     Zh     Ko     Ar     Bn     Sw
LLaMA-3.1  RG↑    1.58   0.36   0.29   0.60   0.04   0.05
           ASR↓   9.00  22.00  50.00  15.00  55.00  57.00
Gemma-2    RG↑    2.32   0.69   0.44   0.76   0.42   0.41
           ASR↓   0.00   9.00  14.00   4.00  24.00  26.00
Qwen-2.5   RG↑    1.87   1.81   0.69   0.78   0.14   0.20
           ASR↓  13.00   9.00  21.00  20.00  69.00  98.00

We posit that, compared with Eq. (2), the reward gap in Eq.
(4) provides a more accurate measure of safety performance differences across languages for the following reasons: (1) It aligns with the likelihood metric that governs response generation, where a larger reward gap signifies a higher probability of producing safe responses over unsafe ones, serving as a direct indicator of safety performance. (2) Length normalization mitigates reward errors caused by length bias (Singhal et al., 2023; Park et al., 2024)—unsafe responses, which frequently include specific harmful content, are often longer than safe responses, which typically exhibit concise refusal patterns. Please refer to Appendix A for more empirical evidence and discussion.

3 Multilingual Reward Gap Optimization

In this section, we first demonstrate the relationship between the reward gap and multilingual safety performance for different languages on three backbone LLMs (§3.1). Then, we derive the MPO objective (§3.2) and perform gradient analysis (§3.3).

3.1 Reward Gap across Languages

Models We select two English-centric LLMs, LLaMA-3.1-8B-Instruct (Dubey et al., 2024) and Gemma-2-9B-it (Team et al., 2024), and one bilingual LLM, Qwen2.5-7B-Instruct (Yang et al., 2024a), to demonstrate the reward gap (Eq. (4)) on safety issues across different languages.

Languages We select six languages for evaluation based on the availability of language resources. The high-resource languages are English (En) and Chinese (Zh); the medium-resource languages are Korean (Ko) and Arabic (Ar); and the low-resource languages are Bengali (Bn) and Swahili (Sw). For LLaMA-3.1-8B-Instruct and Gemma-2-9B-it, En serves as the dominant language, while that
for Qwen2.5-7B-Instruct is En and Zh.

Data We utilize the PKU-SafeRLHF dataset (Ji et al., 2024) for the reward gap evaluation across languages. This dataset comprises high-quality English preference pairs focused on safety-related questions. To extend its scope, we randomly sample 100 instances and translate them into each target language using the Google Translate API. Subsequently, we query LLMs directly with these multilingual inputs. The reward gap is computed using Eq. (4), while safety performance is evaluated based on the Attack Success Rate (ASR).

Analysis According to the results in Table 1, we can draw two key insights:

• Inverse relationship between RG and ASR: Higher RG corresponds to lower ASR, indicating better safety performance. This demonstrates that RG can, to some extent, reflect the safety performance of LLMs in a specific language.
• Safety performance varies significantly across languages: As reflected in the RG values, lower-resource languages exhibit significantly lower RG compared to high-resource dominant ones, underscoring critical safety concerns in lower-resource settings.

3.2 The MPO Objective

Based on the above insights, we propose a novel method for multilingual safety alignment called Multilingual reward gaP Optimization (MPO). It takes the high-quality and well-aligned RG of the dominant language in LLMs as the pivot and aligns the RG of the target language to it. This facilitates the transfer of the dominant language's safety capabilities to the target language. This process can be formulated as:

L_1 = E_{(x, y_w, y_l) ∼ D} [ ( β RG^t − RG^d )^2 ]    (5)

where t and d represent the target and dominant languages, respectively. β functions to balance and stabilize the optimization (Haarnoja et al., 2018).
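As a minimal sketch of the objective in Eq. (5), the following Python assumes per-token log-probabilities for each response are already available from the policy and the frozen reference model, and uses the length-normalized reward gap of Eq. (4). The dictionary keys and toy numbers are hypothetical, not the paper's implementation.

```python
def avg_logprob(token_logprobs):
    """Length-normalized sequence log-likelihood (SimPO-style reward)."""
    return sum(token_logprobs) / len(token_logprobs)

def mpo_l1(batch, beta=2.0):
    """Mean squared difference between the scaled target-language reward
    gap (under the policy) and the dominant-language reward gap (under
    the frozen reference model): E[(beta * RG_t - RG_d)^2]."""
    total = 0.0
    for ex in batch:
        rg_t = avg_logprob(ex["tgt_safe"]) - avg_logprob(ex["tgt_unsafe"])
        rg_d = avg_logprob(ex["dom_safe"]) - avg_logprob(ex["dom_unsafe"])
        total += (beta * rg_t - rg_d) ** 2
    return total / len(batch)

# One toy example: RG_t = 0.5 under the policy, RG_d = 1.5 under the reference,
# so the loss pulls the target-language gap up toward the dominant one.
batch = [{
    "tgt_safe":   [-0.5, -0.5],   # avg log-prob -0.5
    "tgt_unsafe": [-1.0, -1.0],   # avg log-prob -1.0  -> RG_t = 0.5
    "dom_safe":   [-0.2, -0.2],   # avg log-prob -0.2
    "dom_unsafe": [-1.7, -1.7],   # avg log-prob -1.7  -> RG_d = 1.5
}]
print(mpo_l1(batch, beta=2.0))  # (2*0.5 - 1.5)^2 ≈ 0.25
```

Note that only the target-language gap depends on trainable parameters here; the dominant-language gap acts as a fixed supervision target.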
And RG is calculated by:

RG^t = (1 / |y_w^t|) log π_θ(y_w^t | x^t) − (1 / |y_l^t|) log π_θ(y_l^t | x^t)    (6)

RG^d = (1 / |y_w^d|) log π_ref(y_w^d | x^d) − (1 / |y_l^d|) log π_ref(y_l^d | x^d)    (7)

where the triplets (x^t, y_w^t, y_l^t) and (x^d, y_w^d, y_l^d) are preference pairs derived from the target and dominant languages, respectively. Here, π_θ denotes the policy model, while π_ref serves as the reference model.

To ensure that the capabilities of the dominant language are not compromised, we constrain the representations of the dominant language (at the position of the last token) to remain largely intact:

L_2 = E_{x^d ∼ D} ‖ h^d − h^d_ref ‖^2    (8)

where h^d_ref is the representation of the dominant-language input x^d obtained from the reference model. Inspired by recent empirical findings suggesting that modifying the hidden representations of LLMs is more effective for behavior control (Zou et al., 2023), we choose to constrain these representations directly, rather than applying KL-based regularization on logits (Ziegler et al., 2019).

The final optimization objective of MPO is:

L = L_1 + L_2    (9)

3.3 What does the MPO update do?

The gradient for the learning of target languages with respect to the parameters θ can be written as:

∇_θ L_1(θ) = 2β E_{(x, y_w, y_l) ∼ D} [ w_θ ∇_θ RG^t(θ) ]    (10)

where ∇_θ RG^t(θ) increases the likelihood of the preferred (safe) response y_w^t and decreases the likelihood of the dispreferred (unsafe) response y_l^t for the target language, and is computed by:

∇_θ RG^t(θ) = (1 / |y_w^t|) ∇_θ log π_θ(y_w^t | x^t) − (1 / |y_l^t|) ∇_θ log π_θ(y_l^t | x^t)    (11)

And we have w_θ = β RG^t(θ) − RG^d, which compares the reward gap of the target language (β RG^t) against that of the dominant language (RG^d). This weight enables the model to adjust
both the magnitude and direction of its gradient updates, while the extent of gradient descent is not dictated by the model's likelihood on the dataset. Thus, RG^d effectively sets the goal for how strongly the model should discriminate between y_w^t and y_l^t in target languages. Please refer to Appendix C for the derivation and detailed discussions.

4 Experiments

4.1 Experimental Setup

Models We use the same three backbones as in §3.1 to fully validate the efficacy and scalability of our MPO in safety alignment across languages.

Languages to be Safety Aligned We select six target languages, reflecting diverse linguistic families and resource levels. The high-resource languages are Chinese (Zh) and Japanese (Jp); the medium-resource languages are Korean (Ko) and Arabic (Ar); and the low-resource languages are Bengali (Bn) and Swahili (Sw). For LLaMA-3.1-8B-Instruct and Gemma-2-9B-it, English (En) serves as the dominant language, while for Qwen2.5-7B-Instruct it is En and Zh. It is crucial to note that these target languages are deemed out-of-scope by the official model providers of the three backbones, who stress the importance of additional alignment efforts to guarantee safe and responsible deployment.

Training Data We sample 100 data points from the PKU-SafeRLHF dataset (Ji et al., 2024) and translate them into each target language using the Google Translate API. As a result, all methods are trained on the same 700 pairs of preference data. Details about the training data can be found in Appendix D. A comprehensive discussion on the effects of various translation tools and data volumes is provided in §5.2.

Benchmarks To comprehensively measure the efficacy of MPO in various safety scenarios, we employ three benchmarks for evaluation, including two multilingual jailbreak datasets, MultiJail (Deng et al., 2024) and Advbench-X (Yong et al., 2023), and one code-switch attack dataset, CSRT (Yoo et al., 2024).
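As a rough sketch of how such a training set could be assembled, the following Python pairs each dominant-language (English) preference triplet with its translation into every target language, so that each training example carries both the dominant-language and target-language triplets MPO needs. Here `translate` is a placeholder for an external MT system (such as the Google Translate API used above), not an actual API call, and the field names are illustrative.

```python
def build_mpo_examples(en_triplets, target_langs, translate):
    """Pair each English (prompt, safe, unsafe) triplet with its
    translation into every target language. `translate(text, lang)`
    is a stand-in for an external machine-translation service."""
    examples = []
    for prompt, safe, unsafe in en_triplets:
        for lang in target_langs:
            examples.append({
                "lang": lang,
                "dominant": (prompt, safe, unsafe),
                "target": (translate(prompt, lang),
                           translate(safe, lang),
                           translate(unsafe, lang)),
            })
    return examples

# A dummy translator that just tags text with its language code.
translate = lambda text, lang: f"[{lang}] {text}"
examples = build_mpo_examples(
    [("How can I ...?", "I can't help with that.", "Sure, here is ...")],
    ["zh", "jp", "ko", "ar", "bn", "sw"],
    translate,
)
print(len(examples))  # → 6: one example per target language per triplet
```

With 100 sampled English triplets and the seven languages involved, this kind of expansion yields the fixed preference-pair budget shared by all methods above.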
We use the Attack Success Rate (ASR) as our evaluation metric, calculated according to the evaluation pipeline proposed by Deng et al. (2024) with GPT-4o. Only meaningful refusal responses, excluding unrelated ones, are considered as failed attacks. Please refer to Appendix E for the detailed description of the evaluation setups.

Baseline Methods We compare MPO with supervised finetuning (SFT) (Ouyang et al., 2022) and the following preference optimization methods: DPO (Rafailov et al., 2023), IPO (Azar et al., 2024), rDPO (Chowdhury et al., 2024), CPO (Xu et al., 2024b), KTO (Ethayarajh et al., 2024), ORPO (Hong et al., 2024b), R-DPO (Park et al., 2024) and SimPO (Meng et al., 2024). Please refer to Appendix F for the detailed description of the baseline methods.

Implementation Details All the training experiments are conducted on 8 A100 GPUs based on the LLaMA-Factory repo (Zheng et al., 2024b). For more details, please refer to Appendix G.

Table 2: Detailed results on three multilingual safety benchmarks. The evaluation metric used is the Attack Success Rate (ASR), where lower values indicate better performance. The best results achieved by our method and baselines are highlighted in bold, while the second-best results are underlined.

                      MultiJail                                AdvBench-X                                     CSRT
Methods      En     Zh     Ko     Ar     Bn     Sw     AVG.   En    Zh    Jp     Ko     Ar    Bn     Sw     AVG.
LLaMA-3.1   14.60  20.32  52.38  16.83  49.52  37.78  31.91   1.54  12.5  17.89  19.23  6.15  40.12  48.56  20.86  18.10
SFT         12.70   9.84  31.43   8.57  31.75  39.37  22.28   5.19   1.73  2.31  10.38  3.08  18.23  17.27   8.31  13.65
DPO          6.35   3.17  15.87   2.54  22.86  37.14  14.65   0.77   1.15  2.88   5.58  0.38   8.83  18.23   5.40   5.71
IPO          7.62   5.08  24.44   2.22  36.51  38.73  19.10   0.38   0.77  3.65   8.85  0.96  10.36  21.88   6.69   3.49
rDPO        15.24  14.13  44.29  18.73  50.79  56.83  33.34   6.35   5.77  3.85  11.54  8.08  60.65  56.62  21.84  11.43
CPO         22.85  41.26  29.21  38.10  66.98  66.98  44.23   1.35   2.69  3.85   5.78  1.35  20.96  29.23   9.32  19.37
KTO          4.76   6.67  21.59   4.76  30.79  42.86  18.57   0.58   0.96  3.27   8.46  1.92  11.35  22.84   7.05   7.31
ORPO         9.52   2.86  15.24   1.27  18.73  21.27  11.48   0.19   0.00  0.19   1.35  0.58  11.54  10.75   3.51   3.91
R-DPO       10.16  14.29  35.87   9.84  42.22  46.67  26.51   3.85   3.27 22.31   3.27  5.19   7.49  54.32  14.24  11.43
SimPO        9.21   8.25  30.48   7.30  40.63  42.22  23.02   5.77   3.46 11.73  17.69  5.19  28.94  21.25  13.43   7.62
MPO (Ours)   2.22   0.95   4.76   1.90  12.38  10.79   5.98   0.00   0.19  0.38   2.88  0.00   7.10   5.37   2.27   1.59
Gemma-2      2.54   9.52  14.61   4.13  20.32  14.60  10.95   0.96   1.15  3.08   5.00  3.85   6.72   5.18   3.71   4.76
SFT          2.86   4.44  13.02   4.76  23.17  12.38  10.11   0.19   0.77  1.92   4.42  2.50   5.00   4.22   2.72   5.74
DPO          2.23   7.30  10.79   6.35  23.82  13.33  10.64   0.38   1.73  1.54   3.46  3.08   5.03   3.84   2.72   5.71
IPO          2.86   8.89  16.19   5.08  18.41  14.92  11.06   0.77   1.54  2.50   4.42  3.65   8.25   5.18   3.76   6.37
rDPO         2.54   8.25  14.92   6.35  20.61  14.92  11.27   0.96   1.15  3.27   4.62  3.27   8.45   5.18   3.84   7.62
CPO          3.17   6.67   8.57   4.13  19.68  13.65   9.31   0.38   1.15  1.54   3.85  4.04   6.53   5.57   3.29   5.71
KTO          2.23   6.67  13.97   3.49  20.95  14.92  10.37   0.58   1.15  1.92   4.22  3.08   6.14   4.22   3.04   4.78
ORPO         3.17   6.03  10.16   5.71  17.14  10.48   8.78   0.38   1.54  0.96   2.88  2.12   5.84   4.26   2.57   6.67
R-DPO        3.81   7.62  12.70   6.35  28.25  13.97  12.12   0.58   1.92  4.42   4.81  3.46   7.68   4.80   3.95   6.03
SimPO        2.54   8.57  15.56   4.44  20.95  15.87  11.32   0.58   1.35  2.69   4.42  3.46   7.10   4.61   3.46   6.67
MPO (Ours)   0.63   4.76   6.98   3.81  16.51   7.94   6.77   0.38   0.96  0.19   2.50  2.69   4.22   2.88   1.97   1.90

Table 3: Results of the multilingual utility evaluation. En denotes the performance of the dominant language, while Mul. represents the average performance across six target languages: Zh, Jp, Ar, Ko, Bn and Sw.

              MT-Bench        M-MMLU          MGSM
              En     Mul.     En     Mul.     En     Mul.
LLaMA-3.1     7.31   4.81     67.70  45.35    88.00  40.13
+ MPO         7.25   4.92     67.10  44.67    88.00  44.67
Gemma-2       7.71   6.60     73.40  55.97    90.00  72.93
+ MPO         7.83   6.63     73.40  55.92    90.80  74.80

4.2 Overall Evaluation

Table 2 demonstrates the performance comparison of MPO and baselines based on LLaMA-3.1-8B-Instruct and Gemma-2-9B-it. Please refer to Appendix H.1 for more results on Qwen2.5-7B-Instruct. From the results across all backbones, we have drawn the following key insights:

MPO exhibits robust and
consistent performance across various benchmarks and backbone models. It consistently surpasses all preference learning methods across three backbone LLMs and benchmarks, highlighting its outstanding safety alignment capabilities and scalability.

MPO excels in low-resource languages. Existing baseline methods often exhibit biased performance, disproportionately benefiting high-resource languages (e.g., Zh and Jp) and those where the model already demonstrates strong safety alignment (e.g., Ar). In contrast, MPO achieves comprehensive and significant improvements, particularly in low-resource languages (e.g., Bn and Sw). This highlights the effectiveness of leveraging high-quality internal safety alignment signals instead of relying exclusively on uneven preference data.

MPO maintains multilingual utility. Multilingual safety alignment should not compromise the model's general multilingual utility. Thus, we evaluate the resulting model across three key dimensions: (1) World Knowledge: M-MMLU (Hendrycks et al., 2021), (2) Reasoning: MGSM (Shi et al., 2023), and (3) Multi-turn Instruction-Following: MT-Bench (Zheng et al., 2023). The results in Table 3 show that MPO consistently maintains the general utility of both the dominant and

Table 4: Ablation results on the key components of MPO. The best results are highlighted in bold.

                  MultiJail (ASR↓)                             MT-Bench
             En     Zh     Ko     Ar     Bn     Sw     AVG.    En    Zh    Jp    Ko    Ar    Bn    Sw    AVG.
MPO          2.22   0.95   4.76   1.90  12.38  10.79   5.98    7.25  5.32  5.26  5.44  5.38  4.11  4.01  5.25
w/o Retain   2.23   0.63   1.90   0.95  10.16  13.33   4.87    7.19  5.09  4.41  5.27  4.79  3.46  3.79  4.86
w/ KL       14.60  22.54  58.73  22.54  58.10  75.87  42.06    7.41  5.33  5.16  5.58  5.38  4.28  4.11  5.32
w/o LN      17.78  26.67  57.46  26.67  65.08  77.14  45.13    7.41  5.61  5.29  5.53  5.38  4.31  4.31  5.41
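The role of length normalization probed by the "w/o LN" ablation in Table 4 can be illustrated with a small numeric sketch. The per-token log-probabilities below are toy values chosen to show the failure mode the paper describes (unsafe responses tend to be longer than concise refusals), not measurements from the models.

```python
def gap(safe_lp, unsafe_lp, length_normalize=True):
    """Reward gap between a safe and an unsafe response, computed either
    from length-normalized average log-probs or from raw sums."""
    if length_normalize:
        return sum(safe_lp) / len(safe_lp) - sum(unsafe_lp) / len(unsafe_lp)
    return sum(safe_lp) - sum(unsafe_lp)

# A concise 4-token refusal vs. a 40-token unsafe completion, both with the
# same per-token log-probability of -0.5 (the model has no real preference).
safe_lp = [-0.5] * 4
unsafe_lp = [-0.5] * 40

print(gap(safe_lp, unsafe_lp, length_normalize=True))   # → 0.0: no preference
print(gap(safe_lp, unsafe_lp, length_normalize=False))  # → 18.0: inflated by length
```

Without normalization, the raw sums make the model look strongly safety-aligned purely because the unsafe response is longer; this is the length-induced bias the LN component removes.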
target languages. For detailed results, evaluation settings and the comparison with baseline methods, please refer to Appendix H.2.

5 Analysis and Discussions

In this section, we offer a comprehensive analysis of MPO from: (1) ablation studies (§5.1), (2) the influence of preference data quality and quantity (§5.2), and (3) the rewards, representations, and case visualizations of the resulting model (§5.3). Unless stated otherwise, all analyses are conducted using the LLaMA-3.1 backbone.

5.1 Ablation Study

Effect of Reward Gap from the Dominant Language as the Supervision Signal To assess the effectiveness of using the reward gap from the dominant language as an alignment objective, we conduct ablation experiments where we replace it with either a fixed constant or the reward gap of other languages. We also compare MPO against recent cross-lingual transfer methods.

[Figure 2: The results of replacing the dominant language reward gap with a fixed value (x-axis from 0.1 to 20.0) on multilingual safety (MultiJail) and general utility (MT-Bench) performance.]

Figure 2 shows the impact of replacing the dominant language reward gap with a fixed value (0.1–20) on multilingual safety and general utility. While increasing the constant enhances safety performance, it significantly degrades general utility due to excessive parameter shifts, leading to model collapse despite retention constraints. Notably, setting the constant to 1.58 (the training set's average reward gap of the dominant language) yields limited gains, highlighting the superiority of the fine-grained instance-level supervision in our MPO over coarse-grained dataset-level alignment. Please see Appendix H.3 for more details.

Table 5: Multilingual safety performance when replacing the reward gap with that from Ar, Bn and Sw as the supervision signal. The evaluation metric is the Attack Success Rate (ASR), where lower values indicate better performance. The best results are highlighted in bold.

                     MultiJail
               En     Zh     Ko     Ar     Bn     Sw     AVG.
LLaMA-3.1     14.60  20.32  52.38  16.83  49.52  37.78  31.91
Align with Ar  6.98   6.67  20.00   4.13  17.78  46.35  16.99
Align with Bn 35.56  41.91  72.07  51.75  63.49  84.44  58.20
Align with Sw 20.63  30.16  53.97  26.35  53.97  81.90  44.50
MPO            2.22   0.95   4.76   1.90  12.38  10.79   5.98

Table 5 further shows that using the reward gap of a target language as the alignment objective fails to yield meaningful safety improvements. Even when selecting the second-best safety-performing language (Ar) or low-resource languages (Sw, Bn), no effective multilingual safety enhancement is observed. This reinforces that the dominant language's reward gap provides a more reliable and high-quality supervision signal. For Qwen2.5, although it is a bilingual LLM with both Chinese and English as dominant languages, we find that using Chinese as the alignment target leads to better safety alignment performance compared to using English. Detailed results supporting this observation are provided in Appendix H.1, Table 9.

Table 14 in Appendix H.3 compares MPO with state-of-the-art cross-lingual transfer methods, which align multilingual safety by either aligning multilingual representations, CLA (Li et al., 2024a) and LENS (Zhao et al., 2024a), or distilling knowledge from the dominant language, SDRRL (Zhang et al., 2024b). MPO consistently outperforms these methods, maintaining strong multilingual safety alignment.
This further highlights the advantage of leveraging the dominant language's reward gap as a fine-grained supervision signal.

[Figure 3: Impact of the preference data. (a) Multilingual safety performance on MultiJail with varied data quality (NLLB-600M, NLLB-1.3B, NLLB-3.3B). (b) Multilingual safety performance on MultiJail with varied data size.]

Effect of Other Components in MPO We further analyze the effect of other key components in Table 4. Removing the Retain component in Eq. (8) leads to a significant drop in multilingual utility, demonstrating its efficacy in preserving cross-lingual robustness. Introducing a KL-divergence-based constraint imposes a strong regularization that restricts the alignment of reward gap distributions across different languages, limiting the flexibility of MPO in adapting to multilingual safety preferences. Finally, removing length normalization (LN) in the reward gap computation results in biased reward gap values, particularly in safety scenarios where unsafe responses are often longer than safe ones, highlighting that LN effectively mitigates length-induced bias and facilitates more stable multilingual safety alignment.

5.2 The Impact of Preference Data

Impact of Data Quality To evaluate the
robustness of MPO across different levels of multilingual preference data quality, we employ three versions of the dataset obtained using three NLLB (Costa-jussà et al., 2022) translation models of varying sizes: NLLB-600M, NLLB-1.3B, and NLLB-3.3B. These models represent a progressive improvement in translation quality, with the largest model generally producing more accurate translations. Results are shown in Figure 3(a).

Baselines show considerable performance variations across different data quality levels, struggling to maintain stable safety alignment—even when trained on the highest-quality preference data (NLLB-3.3B). This underscores the challenges that noisy multilingual data pose for existing alignment methods. In contrast, MPO consistently delivers the best results across all data quality levels, demonstrating its stability and resilience to data noise. This validates the effectiveness of leveraging the reward gap in the dominant language as a source of high-quality supervision.

Further, recent studies explore using LLMs themselves to generate multilingual preference data, rather than relying on external translation tools (She et al., 2024; Yang et al., 2024b). MPO consistently achieves the best multilingual safety alignment results across these data sources, demonstrating its robustness to variations in preference data. Please refer to Appendix H.4 for detailed results and analysis.

Impact of Data Quantity Figure 3(b) compares MPO with baseline methods across varying dataset sizes, with the x-axis representing the number of preference samples per language. MPO maintains stable performance across different data volumes, consistently outperforming baselines. However, all methods, including MPO, exhibit diminishing returns as data increases, with baseline performance even degrading with excessive data.
This highlights that enhancing supervision signal quality is far more effective than simply increasing data volume, aligning with broader LLM post-training trends (Zhou et al., 2023; Cao et al., 2024; Guo et al., 2025; Ye et al., 2025).

5.3 Visualization Analysis

To better illustrate the impact of MPO on multilingual safety alignment, we visualize changes in the reward gap and the model's internal representation space.

[Figure 4: Reward gap across languages (En, Zh, Ar, Ko, Bn, Sw) for the original backbone and those safety-aligned by MPO and DPO.]

In Figure 4, MPO consistently achieves a higher reward gap than DPO across all languages. Notably, it significantly improves low-resource languages such as Swahili and Bengali, reducing the performance gap with English. Further, the visualization of the model's representation space in Figure 5 shows that MPO enables a clearer distinction between safe and unsafe responses in the target language Sw. This suggests that MPO enhances the model's ability to differentiate safety-critical responses, reinforcing its effectiveness in multilingual safety alignment. Please refer to Appendix H.5 for more visualization results.

6 Related Works

Multilingual Safety Vulnerability Recent studies have exposed risks in the multilingual safety of LLMs, underscoring the need for multilingual safety alignment (Qin et al., 2024; Li et al., 2024c; Gupta et al., 2024; Kanepajs et al., 2024; Verma and Bharadwaj, 2025). One line of approaches translates harmful prompts from high-resource to low-resource languages to assess safety (Yong et al., 2023; Deng et al., 2024; Xu et al., 2024b;
Shen et al., 2024; Li et al., 2024b; Wang et al., 2024c; Poppi et al., 2024), as seen in Deng et al. (2024), which manually translated 315 English safety prompts (Ganguli et al., 2022) into nine languages. Others evaluate multilingual safety using code-switching, embedding multiple languages within the same harmful input (Gutiérrez-Clellen, 1999; Yoo et al., 2024; Song et al., 2024b; Upadhayay and Behzadan, 2024). While these works have established a solid testbed for multilingual safety in LLMs, they have yet to introduce effective solutions to the existing challenges in this domain.

[Figure 5: The visualization of multilingual representations for English and Swahili (safe_en, safe_sw, unsafe_en, unsafe_sw).]

Safety Alignment Technique DPO (Rafailov et al., 2023) has emerged as a widely adopted offline preference learning method for aligning LLMs with human safety principles and values. In addition to DPO, various preference optimization objectives have been introduced. Ranking-based objectives enable comparisons among more than two instances (Dong et al., 2023; Yuan et al., 2023; Liu et al., 2024b; Song et al., 2024a). IPO (Azar et al., 2024) mitigates the overfitting issues inherent in DPO, while KTO (Ethayarajh et al., 2024) addresses preference optimization in non-pairwise data settings. Meanwhile, ORPO (Hong et al., 2024b) and SimPO (Meng et al., 2024) seek to remove reliance on a reference model.

Our proposed MPO stands out from existing methods in that we seek multilingual supervision signals from the internal reward gap of the LLMs, which specifically addresses the challenge of uneven data quality in multilingual safety alignment and offers fresh insights and new opportunities for achieving effective multilingual safety alignment.
7 Conclusion

In this paper, we introduce MPO, a novel approach to multilingual safety alignment that leverages the reward gap of the dominant language as a high-quality supervision signal. MPO directly minimizes the discrepancy of the reward gap across different languages to transfer safety alignment effectively. Experiments on LLaMA-3.1, Gemma-2, and Qwen2.5 confirm that MPO outperforms existing methods in multilingual safety alignment without compromising general multilingual utility. Further analysis shows that MPO remains robust across varying data qualities and sources, reinforcing the superiority of the dominant language's reward gap as a scalable alignment signal. These results establish MPO as a practical and effective solution for deploying multilingual-safe LLMs.

Limitations

This work has several limitations that provide directions for future research. Due to computational constraints, we conducted experiments on mid-scale models and did not extend our evaluation to larger-scale ones such as 32B or even 72B LLMs. Future work should explore whether MPO scales effectively with larger models and whether its advantages persist at greater parameter sizes.

Additionally, we have focused exclusively on the application of MPO to multilingual safety alignment. However, there are more challenging and diverse alignment tasks that could be explored in the future, particularly those involving multicultural value alignment (Sorensen et al., 2024; Yao et al., 2024; Cahyawijaya et al., 2024). As multilingual safety alignment is only one aspect
of broader ethical considerations, future work could extend the current methodology to tackle these value alignment challenges, ensuring models respect different cultural norms and ethical standards across regions.

Furthermore, given that safety guidelines are universal principles that users across various linguistic and cultural regions must adhere to, as emphasized in OpenAI (OpenAI, 2024b) and Meta's user guidelines (AI, 2024), it is reasonable to transfer the safety alignment of the dominant language to other languages. This idea has proven effective in our experiments, and we believe it could be validated in broader multilingual tasks in the future, particularly those that are language-agnostic, such as general problem-solving skills (Hu et al., 2024; Zhang et al., 2024a; Huang et al., 2024; Wang et al., 2024b). Future work could explore these areas and broaden the scope of multilingual model evaluation, to ensure that advanced AI technologies are universally applicable and can promote responsible and ethical AI development on a global scale. We hope the research community continues to push forward in advancing these technologies and facilitating their global adoption.

Ethical Considerations

This work is conducted solely for academic research purposes and aims to address multilingual safety risks in large language models (LLMs). The primary goal of our study is to improve the robustness and consistency of LLMs across different languages, ensuring that they adhere to established safety principles regardless of linguistic variations. We acknowledge that multilingual safety alignment is a complex challenge, and our research does not aim to impose any specific cultural or ethical standards on diverse linguistic communities. Instead, our approach focuses on enhancing model consistency in following universally recognized safety guidelines, as outlined in the user policies of major AI developers such as OpenAI and Meta.
By ensuring equitable safety alignment across languages, we seek to mitigate risks associated with uneven safety performance in LLMs and reduce potential harm in lower-resource languages. In conclusion, we aim to contribute to the development of fair, transparent, and globally applicable AI systems that align with responsible AI deployment principles. We encourage further community-driven research to refine multilingual safety alignment and promote the ethical and safe application of AI technologies worldwide.

Acknowledgments

We thank the anonymous reviewers for their comments and suggestions. This work was supported by the National Key R&D Program of China via grant 2021YFF0901602, the National Natural Science Foundation of China (NSFC) via grant 62176078, the Fundamental Research Funds for the Central Universities, and the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant (No. MSS24C012).

References

Arash Ahmadian, Beyza Ermis, Seraphina Goldfarb-Tarrant, Julia Kreutzer, Marzieh Fadaee, Sara Hooker, et al. 2024. The multilingual alignment prism: Aligning global and local preferences to reduce harm. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12027–12049.

Meta AI. 2024. Meta safety policies. Meta.

Carlo Alfano, Silvia Sapora, Jakob Nicolaus Foerster, Patrick Rebeschini, and Yee Whye Teh. 2024. Learning loss landscapes in preference optimization. arXiv preprint arXiv:2411.06568.
Mohammad Gheshlaghi Azar, Zhaohan Daniel Guo, Bilal Piot, Remi Munos, Mark Rowland, Michal Valko, and Daniele Calandriello. 2024. A general theoretical paradigm to understand learning from human preferences. In International Conference on Artificial Intelligence and Statistics, pages 4447–4455. PMLR.

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.

Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901.

Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, and Pascale Fung. 2024. High-dimension human value representation in large language models. arXiv preprint arXiv:2404.07900.

Boxi Cao, Keming Lu, Xinyu Lu, Jiawei Chen, Mengjie Ren, Hao Xiang, Peilin Liu, Yaojie Lu, Ben He, Xianpei Han, et al. 2024. Towards scalable automated alignment of llms: A survey. arXiv preprint arXiv:2406.01252.

Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. 2024. Self-play fine-tuning converts weak language models to strong language models. In Forty-first International Conference on Machine Learning.

Sayak Ray Chowdhury, Anush Kini, and Nagarajan Natarajan. 2024. Provably robust dpo: Aligning language models with noisy feedback. In Forty-first International Conference on Machine Learning.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.

Marta R Costa-jussà, James Cross, Onur Çelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. 2022. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672.

Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, and Lidong Bing. 2024. Multilingual jailbreak challenges in large language models. In The Twelfth International Conference on Learning Representations.

Hanze Dong, Wei Xiong, Deepanshu Goyal, Yihan Zhang, Winnie Chow, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. 2023. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767.

Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model alignment as prospect theoretic optimization. arXiv preprint arXiv:2402.01306.

Julen Etxaniz, Gorka Azkune, Aitor Soroa, Oier Lacalle, and Mikel Artetxe. 2024. Do multilingual language models think better in english? In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 550–564.

Felix Friedrich, Simone Tedeschi, Patrick Schramowski,
Manuel Brack, Roberto Navigli, Huu Nguyen, Bo Li, and Kristian Kersting. 2024. Llms lost in translation: M-alert uncovers cross-linguistic safety gaps. arXiv preprint arXiv:2412.15035.

Deep Ganguli, Liane Lovitt, Jackson Kernion, Amanda Askell, Yuntao Bai, Saurav Kadavath, Ben Mann, Ethan Perez, Nicholas Schiefer, Kamal Ndousse, et al. 2022. Red teaming language models to reduce harms: Methods, scaling behaviors, and lessons learned. arXiv preprint arXiv:2209.07858.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948.

Shangmin Guo, Biao Zhang, Tianlin Liu, Tianqi Liu, Misha Khalman, Felipe Llinares, Alexandre Rame, Thomas Mesnard, Yao Zhao, Bilal Piot, et al. 2024. Direct language model alignment from online ai feedback. arXiv preprint arXiv:2402.04792.

Prannaya Gupta, Le Yau, Hao Low, I-Shiang Lee, Hugo Lim, Yu Teoh, Koh Hng, Dar Liew, Rishabh Bhardwaj, Rajat Bhardwaj, et al. 2024. Walledeval: A comprehensive safety evaluation toolkit for large language models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 397–407.

Vera F Gutiérrez-Clellen. 1999. Language choice in intervention with bilingual children. American Journal of Speech-Language Pathology, 8(4):291–302.

Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. 2018. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pages 1861–1870. PMLR.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. In International Conference on Learning Representations.
Jiwoo Hong, Noah Lee, Rodrigo Martínez-Castaño, César Rodríguez, and James Thorne. 2024a. Cross-lingual transfer of reward models in multilingual alignment. arXiv preprint arXiv:2410.18027.

Jiwoo Hong, Noah Lee, and James Thorne. 2024b. Orpo: Monolithic preference optimization without reference model. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 11170–11189.

Peng Hu, Sizhe Liu, Changjiang Gao, Xin Huang, Xue Han, Junlan Feng, Chao Deng, and Shujian Huang. 2024. Large language models are cross-lingual knowledge-free reasoners. arXiv preprint arXiv:2406.16655.

Haoyang Huang, Tianyi Tang, Dongdong Zhang, Xin Zhao, Ting Song, Yan Xia, and Furu Wei. 2023. Not all languages are created equal in llms: Improving multilingual capability by cross-lingual-thought prompting. In The 2023 Conference on Empirical Methods in Natural Language Processing.

Yue Huang, Chenrui Fan, Yuan Li, Siyuan Wu, Tianyi Zhou, Xiangliang Zhang, and Lichao Sun. 2024. 1+1>2: Can large language models serve as cross-lingual knowledge aggregators? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13394–13412.

Jiaming Ji, Donghai Hong, Borong Zhang, Boyuan Chen, Josef Dai, Boren Zheng, Tianyi Qiu, Boxun Li, and Yaodong Yang. 2024. Pku-saferlhf: Towards multi-level safety alignment for llms with human preference. arXiv preprint arXiv:2406.15513.

Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al.
2023. Mistral 7b. arXiv preprint arXiv:2310.06825.

Arturs Kanepajs, Vladimir Ivanov, and Richard Moulange. 2024. Towards safe multilingual frontier ai. In Workshop on Socially Responsible Language Modelling Research.

Sungdong Kim and Minjoon Seo. 2024. Rethinking the role of proxy rewards in language model alignment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 20656–20674.

Chong Li, Shaonan Wang, Jiajun Zhang, and Chengqing Zong. 2024a. Improving in-context learning of multilingual generative language models with cross-lingual alignment. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8051–8069.

Jie Li, Yi Liu, Chongyang Liu, Ling Shi, Xiaoning Ren, Yaowen Zheng, Yang Liu, and Yinxing Xue. 2024b. A cross-language investigation into jailbreak attacks in large language models. arXiv preprint arXiv:2401.16765.

Yahan Li, Yi Wang, Yi Chang, and Yuan Wu. 2024c. Xtrust: On the multilingual trustworthiness of large language models. arXiv preprint arXiv:2409.15762.

Chaoqun Liu, Wenxuan Zhang, Yiran Zhao, Anh Tuan Luu, and Lidong Bing. 2024a. Is translation all you need? A study on solving multilingual tasks with large language models. arXiv preprint arXiv:2403.10258.

Tianqi Liu, Zhen Qin, Junru Wu, Jiaming Shen, Misha Khalman, Rishabh Joshi, Yao Zhao, Mohammad Saleh, Simon Baumgartner, Jialu Liu, et al. 2024b. Lipo: Listwise preference optimization through learning-to-rank. arXiv preprint arXiv:2402.01878.

Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. Simpo: Simple preference optimization with a reference-free reward. arXiv preprint arXiv:2405.14734.

OpenAI. 2024a. Gpt-4o system card. OpenAI.

OpenAI. 2024b. Openai use policies. OpenAI.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744.

Ryan Park, Rafael Rafailov, Stefano Ermon, and Chelsea Finn. 2024. Disentangling length from quality in direct preference optimization. arXiv preprint arXiv:2403.19159.

Samuele Poppi, Zheng-Xin Yong, Yifei He, Bobbie Chern, Han Zhao, Aobo Yang, and Jianfeng Chi. 2024. Towards understanding the fragility of multilingual llms against fine-tuning attacks. arXiv preprint arXiv:2410.18210.

Libo Qin, Qiguang Chen, Fuxuan Wei, Shijue Huang, and Wanxiang Che. 2023. Cross-lingual prompting: Improving zero-shot chain-of-thought reasoning across languages. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2695–2709.

Libo Qin, Qiguang Chen, Yuhang Zhou, Zhi Chen, Yinghui Li, Lizi Liao, Min Li, Wanxiang Che, and Philip S Yu. 2024. Multilingual large language model: A survey of resources, taxonomy and frontiers. arXiv preprint arXiv:2404.04925.

Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741.

Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, and Fabio Massimo Zanzotto. 2023. Empowering multi-step reasoning across languages via tree-of-thoughts. arXiv preprint arXiv:2311.08097.

Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. 2020. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 3505–3506.

Noam Razin, Sadhika Malladi, Adithya Bhaskar, Danqi Chen, Sanjeev Arora, and Boris Hanin. 2024. Unintentional unalignment: Likelihood displacement in direct preference optimization. In NeurIPS 2024 Workshop on Mathematics of Modern Machine Learning.

Corby Rosset, Ching-An Cheng, Arindam Mitra, Michael Santacroce, Ahmed Awadallah, and Tengyang Xie. 2024. Direct nash optimization: Teaching language models to self-improve with general preferences. arXiv preprint arXiv:2404.03715.

Shuaijie She, Wei Zou, Shujian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, and Jiajun Chen. 2024. MAPO: Advancing multilingual reasoning through multilingual-alignment-as-preference optimization. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10015–10027.

Lingfeng Shen, Weiting Tan, Sihao Chen, Yunmo Chen, Jingyu Zhang, Haoran Xu, Boyuan Zheng, Philipp Koehn, and Daniel Khashabi. 2024. The language barrier: Dissecting safety challenges of LLMs in multilingual contexts. In Findings of the Association for Computational Linguistics: ACL 2024, pages 2668–2680.

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. 2023. Language models are multilingual chain-of-thought reasoners. In The Eleventh International Conference on Learning Representations.

Prasann Singhal, Tanya Goyal, Jiacheng Xu, and Greg Durrett. 2023. A long way to go: Investigating length correlations in rlhf. arXiv preprint arXiv:2310.03716.

Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, and Houfeng Wang. 2024a. Preference ranking optimization for human alignment.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 18990–18998.

Jiayang Song, Yuheng Huang, Zhehua Zhou, and Lei Ma. 2024b. Multilingual blending: Llm safety alignment evaluation with language mixture. arXiv preprint arXiv:2407.07342.

Taylor Sorensen, Jared Moore, Jillian Fisher, Mitchell Gordon, Niloofar Mireshghallah, Christopher Michael Rytting, Andre Ye, Liwei Jiang, Ximing Lu, Nouha Dziri, et al. 2024. A roadmap to pluralistic alignment. arXiv preprint arXiv:2402.05070.

Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupatiraju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, et al. 2024. Gemma 2: Improving open language models at a practical size. arXiv preprint arXiv:2408.00118.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Bibek Upadhayay and Vahid Behzadan. 2024. Sandwich attack: Multi-language mixture adaptive attack on llms. arXiv preprint arXiv:2404.07242.

Nikhil Verma and Manasa Bharadwaj. 2025. The hidden space of safety: Understanding preference-tuned llms in multilingual context. arXiv preprint arXiv:2504.02708.

Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. 2024a.
Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080.

Weixuan Wang, Barry Haddow, Minghao Wu, Wei Peng, and Alexandra Birch. 2024b. Sharing matters: Analysing neurons across languages and tasks in llms. arXiv preprint arXiv:2406.09265.

Wenxuan Wang, Zhaopeng Tu, Chang Chen, Youliang Yuan, Jen-tse Huang, Wenxiang Jiao, and Michael Lyu. 2024c. All languages matter: On the multilingual safety of LLMs. In Findings of the Association for Computational Linguistics: ACL 2024, pages 5865–5877.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837.

Junkang Wu, Xue Wang, Zhengyi Yang, Jiancan Wu, Jinyang Gao, Bolin Ding, Xiang Wang, and Xiangnan He. 2024a. α-dpo: Adaptive reward margin is what direct preference optimization needs. arXiv preprint arXiv:2410.10148.

Yue Wu, Zhiqing Sun, Huizhuo Yuan, Kaixuan Ji, Yiming Yang, and Quanquan Gu. 2024b. Self-play preference optimization for language model alignment. arXiv preprint arXiv:2405.00675.

Zhaofeng Wu, Ananth Balashankar, Yoon Kim, Jacob Eisenstein, and Ahmad Beirami. 2024c. Reuse your rewards: Reward model transfer for zero-shot cross-lingual alignment. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 1332–1353.

Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. 2024a. Contrastive preference optimization: Pushing the boundaries of llm performance in machine translation. In Forty-first International Conference on Machine Learning.

Nan Xu, Fei Wang, Ben Zhou, Bangzheng Li, Chaowei Xiao, and Muhao Chen. 2024b. Cognitive overload: Jailbreaking large language models with overloaded logical thinking.
In Findings of the Association for Computational Linguistics: NAACL 2024, pages 3526–3548.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. 2024a. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Wen Yang, Junhong Wu, Chen Wang, Chengqing Zong, and Jiajun Zhang. 2024b. Language imbalance driven rewarding for multilingual self-improving. arXiv preprint arXiv:2410.08964.

Jing Yao, Xiaoyuan Yi, Yifan Gong, Xiting Wang, and Xing Xie. 2024. Value fulcra: Mapping large language models to the multidimensional spectrum of basic human value. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 8754–8777.

Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. 2025. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387.

Zheng Xin Yong, Cristina Menghini, and Stephen Bach. 2023. Low-resource languages jailbreak gpt-4. In Socially Responsible Language Modelling Research.

Haneul Yoo, Yongjin Yang, and Hwaran Lee. 2024. Code-switching red-teaming: Llm evaluation for safety and multilingual understanding. arXiv preprint arXiv:2406.15481.

Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback. Advances in Neural Information Processing Systems, 36.

Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Xian
Li, Sainbayar Sukhbaatar, Jing Xu, and Jason E Weston. 2024. Self-rewarding language models. In Forty-first International Conference on Machine Learning.

Shimao Zhang, Changjiang Gao, Wenhao Zhu, Jiajun Chen, Xin Huang, Xue Han, Junlan Feng, Chao Deng, and Shujian Huang. 2024a. Getting more from less: Large language models are good spontaneous multilingual learners. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 8037–8051.

Yuanchi Zhang, Yile Wang, Zijun Liu, Shuo Wang, Xiaolong Wang, Peng Li, Maosong Sun, and Yang Liu. 2024b. Enhancing multilingual capabilities of large language models through self-distillation from resource-rich languages. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11189–11204.

Weixiang Zhao, Yulin Hu, Jiahe Guo, Xingyu Sui, Tongtong Wu, Yang Deng, Yanyan Zhao, Bing Qin, Wanxiang Che, and Ting Liu. 2024a. Lens: Rethinking multilingual enhancement for large language models. arXiv preprint arXiv:2410.04407.

Wenting Zhao, Xiang Ren, Jack Hessel, Claire Cardie, Yejin Choi, and Yuntian Deng. 2024b. Wildchat: 1m chatgpt interaction logs in the wild. In The Twelfth International Conference on Learning Representations.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Tianle Li, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zhuohan Li, Zi Lin, Eric Xing, et al. 2024a. Lmsys-chat-1m: A large-scale real-world llm conversation dataset. In The Twelfth International Conference on Learning Representations.

Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Processing Systems, 36:46595–46623.

Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, and Zheyan Luo. 2024b. LlamaFactory: Unified efficient fine-tuning of 100+ language models.
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations), pages 400–410.

Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for alignment. Advances in Neural Information Processing Systems, 36.

Zhenglin Zhou, Xiaobo Xia, Fan Ma, Hehe Fan, Yi Yang, and Tat-Seng Chua. 2025. Dreamdpo: Aligning text-to-3d generation with human preferences via direct preference optimization. arXiv preprint arXiv:2502.04370.

Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593.

Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, et al. 2023. Representation engineering: A top-down approach to ai transparency. arXiv preprint arXiv:2310.01405.

Model       Metric     En      Zh      Ko      Ar      Bn      Sw
LLaMA-3.1   RG↑      27.02   25.09   22.82   27.85   32.21   29.77
            ASR↓      9.00   22.00   50.00   15.00   55.00   57.00
Gemma-2     RG↑     214.57   63.92   45.54   88.46   80.62   96.60
            ASR↓      0.00    9.00   14.00    4.00   24.00   26.00
Qwen-2.5    RG↑      13.84   12.31    9.77   19.76    5.04   23.77
            ASR↓     13.00    9.00   21.00   20.00   69.00   98.00

Table 6: Results of reward gap (calculated by Eq. 2) and safety performance across six languages. The evaluation metric
used for safety is the Attack Success Rate (ASR), where lower values indicate better performance. Results for the dominant language are highlighted in bold.

A Further Discussion on Reward Gap

We have already discussed in §2.2 that using Eq. (4) within SimPO (Meng et al., 2024) to compute the reward gap is more reasonable than using Eq. (2) from DPO (Rafailov et al., 2023). Additionally, in §3.1, we provide an intuitive demonstration of the advantages of using Eq. (4) for reward gap calculation. Here, we conduct a more in-depth analysis to illustrate the limitations of Eq. (2). Table 6 presents the results of computing the reward gap using Eq. (2) across three different backbone models. β is set to 1.0, and the base versions of these backbones are adopted as the reference models. The dataset used for this evaluation remains consistent with that in §3.1. However, the results indicate that the computed reward gap fails to accurately reflect the model's safety performance across different languages. We attribute this discrepancy to the following three key reasons:

(1) Inference-Training Objective Mismatch: The reward formulation in Eq. (2) is derived from the implicit reward used during the training phase, but it does not directly align with the log-likelihood objective that governs inference (Meng et al., 2024). As a result, the reward gap computed with Eq. (2) may not faithfully capture the model's actual generation behavior, leading to misleading safety performance evaluations.

(2) Bias in the Reference Model: Ideally, the reference model used for computing the reward gap of a preference-optimized model should be the supervised fine-tuned (SFT) model from the previous training stage, rather than the base model (Ouyang et al., 2022; Rafailov et al., 2023). However, model providers do not publicly release this intermediate SFT model, making it difficult to obtain an accurate reference.
As a result, using the base model as the reference introduces bias (Hong et al., 2024b; Wu et al., 2024a), further compromising the reliability of Eq. (2) in assessing safety performance.

(3) Length Bias Effects: Eq. (2) does not incorporate length normalization, making it susceptible to biases introduced by response-length disparities (Meng et al., 2024; Kim and Seo, 2024). Empirically, unsafe responses tend to be longer due to the presence of explicit harmful content, while safe responses are often concise refusals. This discrepancy skews the reward gap calculations, causing inconsistencies in cross-linguistic safety evaluations.

These limitations collectively suggest that Eq. (2) from DPO (Rafailov et al., 2023) is not a reliable metric for evaluating safety differences across languages. In contrast, Eq. (4) from SimPO (Meng et al., 2024) mitigates these issues by normalizing the log-likelihood with sequence length, ensuring a more accurate measure of safety performance.

B Further Discussion on MPO

Here we further explain how the reward gap of the dominant language, RG^d, influences the learning of y_w^t and y_l^t for different languages. Recall that:

$$\mathrm{RG}^d = \frac{1}{|y_w^d|}\log\pi_{\mathrm{ref}}\!\left(y_w^d \mid x^d\right) - \frac{1}{|y_l^d|}\log\pi_{\mathrm{ref}}\!\left(y_l^d \mid x^d\right),$$

where π_ref is a reference policy (or model). This quantity, RG^d, is constant with
respect to the trainable parameters θ (because it depends only on the reference model). However, it plays an important role in shaping how θ is learned for y_w^t and y_l^t.

Target Gap. The difference βRG^t − RG^d appears inside the loss function. Because RG^d is subtracted from βRG^t, it effectively sets a target "goal" in log probabilities that the model π_θ should achieve between the winning (safe) candidate y_w^t and the losing (unsafe) candidate y_l^t.

Penalty for Reward Signal. The term βRG^t − RG^d penalizes deviations of βRG^t from RG^d. Intuitively, if βRG^t is not aligned with RG^d, the loss increases, signaling to the training process that π_θ is not matching the reference gap.

Alignment with Reference Behavior. Because RG^d comes from π_ref, one can interpret it as how strongly the reference policy prefers its "winning" candidate y_w^d over its "losing" candidate y_l^d. By forcing the RG^t produced by π_θ to approximate RG^d, training encourages π_θ to mimic, or at least stay consistent with, that preference structure, though for potentially different x^t and y_w^t, y_l^t.

Effect on y_w^t and y_l^t Learning. For y_w^t: if the reference gap indicates a high preference for the corresponding "winning" candidate y_w^d, then during training the model sees a stronger incentive to increase log π_θ(y_w^t | x^t) (since that helps match the overall gap). For y_l^t: the model similarly sees a signal to decrease log π_θ(y_l^t | x^t) (or at least not let it grow too large), in order to keep the difference consistent with RG^d.

In essence, RG^d provides a reference, or target, difference in log probabilities that the model π_θ tries to match between y_w^t and y_l^t. Although it does not directly update θ (because it is constant with respect to θ), it influences the loss landscape and hence indirectly guides how log π_θ(y_w^t | x^t) and log π_θ(y_l^t | x^t) are learned.

C Gradient Analysis of MPO

C.1 Deriving the Gradient of MPO

Below is a step-by-step derivation of the gradient of the loss function.
$$\mathcal{L}(\pi_\theta) = \mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\left(\beta\,\mathrm{RG}^t - \mathrm{RG}^d\right)^2\right], \quad (12)$$

where

$$\mathrm{RG}^t = \frac{1}{|y_w^t|}\log\pi_\theta\!\left(y_w^t \mid x^t\right) - \frac{1}{|y_l^t|}\log\pi_\theta\!\left(y_l^t \mid x^t\right), \quad (13)$$

and

$$\mathrm{RG}^d = \frac{1}{|y_w^d|}\log\pi_{\mathrm{ref}}\!\left(y_w^d \mid x^d\right) - \frac{1}{|y_l^d|}\log\pi_{\mathrm{ref}}\!\left(y_l^d \mid x^d\right). \quad (14)$$

Note that RG^d does not depend on θ, whereas RG^t depends on θ through log π_θ(·).

Rewrite the Loss Function. Define the per-sample loss (ignoring the expectation for a moment) as

$$\ell(\theta) = \left(\beta\,\mathrm{RG}^t(\theta) - \mathrm{RG}^d\right)^2. \quad (15)$$

For the gradient derivation, we focus on ℓ(θ). The overall gradient will then be its expectation w.r.t. the data distribution D.

Introduce an Intermediate Variable. Let

$$z(\theta) = \beta\,\mathrm{RG}^t(\theta) - \mathrm{RG}^d. \quad (16)$$

Hence

$$\ell(\theta) = z(\theta)^2. \quad (17)$$

If RG^t is a scalar, ‖z(θ)‖² = z(θ)². (For the vector case, one may treat each component in the same way.)

Apply the Chain Rule to ℓ(θ). We have

$$\ell(\theta) = z(\theta)^2. \quad (18)$$

Taking the gradient w.r.t. θ,

$$\nabla_\theta \ell(\theta) = \nabla_\theta\, z(\theta)^2 \quad (19)$$
$$= 2\,z(\theta)\,\nabla_\theta z(\theta). \quad (20)$$

Recalling

$$z(\theta) = \beta\,\mathrm{RG}^t(\theta) - \mathrm{RG}^d, \quad (21)$$

and that RG^d is a constant w.r.t. θ, we get:

$$\nabla_\theta z(\theta) = \beta\,\nabla_\theta \mathrm{RG}^t(\theta). \quad (22)$$

Hence,

$$\nabla_\theta \ell(\theta) = 2\left[\beta\,\mathrm{RG}^t(\theta) - \mathrm{RG}^d\right]\beta\,\nabla_\theta \mathrm{RG}^t(\theta). \quad (23)$$

Compute ∇_θ RG^t(θ). By definition,

$$\mathrm{RG}^t(\theta) = \frac{1}{|y_w^t|}\log\pi_\theta\!\left(y_w^t \mid x^t\right) - \frac{1}{|y_l^t|}\log\pi_\theta\!\left(y_l^t \mid x^t\right). \quad (24)$$

Hence,

$$\nabla_\theta \mathrm{RG}^t(\theta) = \frac{1}{|y_w^t|}\nabla_\theta\log\pi_\theta\!\left(y_w^t \mid x^t\right) - \frac{1}{|y_l^t|}\nabla_\theta\log\pi_\theta\!\left(y_l^t \mid x^t\right). \quad (25)$$

Combine the Results. Putting it all together,

$$\nabla_\theta \ell(\theta) = 2\left[\beta\,\mathrm{RG}^t(\theta) - \mathrm{RG}^d\right]\beta\left[\frac{1}{|y_w^t|}\nabla_\theta\log\pi_\theta\!\left(y_w^t \mid x^t\right) - \frac{1}{|y_l^t|}\nabla_\theta\log\pi_\theta\!\left(y_l^t \mid x^t\right)\right]. \quad (26)$$

The Full Gradient of L(θ). Recall that the original loss is the expectation of ℓ(θ) over samples (x, y_w, y_l) ∼ D. Therefore,

$$\nabla_\theta \mathcal{L}(\theta) = 2\beta\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[\left(\beta\,\mathrm{RG}^t(\theta) - \mathrm{RG}^d\right)\left(\frac{1}{|y_w^t|}\nabla_\theta\log\pi_\theta\!\left(y_w^t \mid x^t\right) - \frac{1}{|y_l^t|}\nabla_\theta\log\pi_\theta\!\left(y_l^t \mid x^t\right)\right)\right]. \quad (27)$$

This completes the derivation of the gradient w.r.t. the parameters θ.

C.2 Analysis

Here we explain, from the gradient view, how RG^d influences the model's updates for y_w^t and y_l^t:

Shifts the Gradient Magnitude and Direction. The difference (βRG^t(θ) − RG^d) multiplies the gradient terms that involve log π_θ(y_w^t | x^t) and log π_θ(y_l^t | x^t). If βRG^t is smaller than RG^d, the difference is negative, which encourages the model to increase RG^t (e.g., by increasing the probability of y_w^t or decreasing the probability of y_l^t) so that it moves toward or surpasses RG^d. If βRG^t is larger than RG^d, the difference is positive, so the model is nudged to preserve or even enlarge its current gap, reinforcing the discrimination it has already learned between y_w^t and y_l^t.

Controls the Drive to Differentiate y_w^t and y_l^t. Because RG^t involves log π_θ(y_w^t) and log π_θ(y_l^t), the difference term (βRG^t(θ) − RG^d) directly scales how strongly the model updates its parameters to favor y_w^t over y_l^t. A larger RG^d essentially raises the "bar" the model is trying to clear; a smaller RG^d lowers it.

D Training Data

To ensure that our multilingual preference data generation process remains on-policy, we adopt a structured approach based on well-established principles in LLM post-training (Dubey et al., 2024). Specifically, for each English harmful prompt in the PKU-SafeRLHF dataset (Ji et al., 2024), we first feed it to the model to generate a refusal response, which serves as the preferred response. We then pair this generated refusal with the original dispreferred response from the dataset, forming a preference pair.
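The pairing step described above can be sketched as follows. This is a minimal illustration rather than the paper's released code: the policy model is stubbed out with a toy function, and the prompt and response strings are hypothetical placeholders standing in for real PKU-SafeRLHF entries.

```python
# Sketch of on-policy preference-pair construction: the "chosen" response is
# generated by the current policy model, while the "rejected" response is the
# static dispreferred response already present in the dataset.

def build_preference_pair(generate, prompt, static_dispreferred):
    """Pair an on-policy refusal (preferred) with the dataset's dispreferred response."""
    chosen = generate(prompt)  # refusal sampled from the current policy model
    return {"prompt": prompt, "chosen": chosen, "rejected": static_dispreferred}

def toy_policy(prompt):
    # Stand-in for the actual LLM call; a safety-tuned model would refuse here.
    return "I'm sorry, but I can't help with that request."

pair = build_preference_pair(
    toy_policy,
    prompt="<hypothetical harmful prompt from PKU-SafeRLHF>",
    static_dispreferred="<original dispreferred response from the dataset>",
)
# Each field of the pair can then be translated to build the multilingual set.
print(pair["chosen"])
```

Because the preferred side is sampled from the model being optimized, the resulting pairs track the model's actual output distribution, which is the on-policy property the paragraph above emphasizes.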
This ensures that the optimization process remains aligned with the model's actual behavior, avoiding potential inconsistencies that arise from using static preference data (Yuan et al., 2024; Chen et al., 2024; Rosset et al., 2024; Wu et al., 2024b; Guo et al., 2024; Zhou et al., 2025). To extend this preference data to multiple languages, we then translate both the harmful prompts and their paired responses using the Google Translate API. This approach allows us to create multilingual preference data while preserving the preference structure of the original dataset.

E Benchmark

We comprehensively measure the efficacy of our MPO on various multilingual safety benchmarks.

•MultiJail (Deng et al., 2024): It carefully gathers 315 English harmful queries and has them manually translated by native speakers into 9 non-English languages, ranging from high-resource to low-resource.

•AdvBench-X (Yong et al., 2023): AdvBench is a set of 500 harmful behaviors formulated as instructions. These behaviors range over the same themes as the harmful-strings setting, but the adversary's goal is to find a single attack string that will cause the model to generate any response that attempts to comply with the instruction, and to do so over as many harmful behaviors as possible. The original English version is also translated manually into target languages of different resource levels.

•CSRT (Yoo et al., 2024): It synthesizes code-switching red-teaming queries, combining up to 10 languages, to investigate the safety and multilingual understanding of LLMs.

We evaluate multilingual safety alignment using the Attack Success Rate (ASR), following the evaluation pipeline proposed by Deng et al. (2024), with GPT-4o as the judgment model. The evaluation process consists of the following steps: (1) Translation to English: Since safety alignment performance needs to be assessed across multiple languages, we first translate the model-generated responses from the target language into English using GPT-4o to ensure consistent evaluation. (2) Three-Class Classification: GPT-4o then classifies each response into one of the following categories: Safe (meaningful refusal), Unsafe, or Irrelevant. (3) Attack Success Calculation: Responses classified as Unsafe or Irrelevant are both considered unsuccessful refusals and thus counted as successful attacks when calculating ASR. Only safe refusals are considered failed attacks, contributing to a lower ASR (better safety performance).

It is essential to highlight that the languages targeted for enhancement, as mentioned above, are all within the capability range of GPT-4o, especially given that its official model card (OpenAI, 2024a) emphasizes support for low-resource languages such as Swahili (Sw) and Bengali (Bn). This underscores the validity and reliability of the evaluation approach.

F Baseline Methods

We compare MPO with the other preference optimization methods listed in Table 7. IPO (Azar et al., 2024) is a theoretically grounded approach that avoids DPO's assumption that pairwise preferences can be replaced with pointwise rewards. rDPO (Chowdhury et al., 2024) mitigates the impact of noise on average, making policies trained with this method more robust. CPO (Xu et al., 2024a) leverages sequence likelihood as a reward and trains jointly with an SFT objective.
KTO (Ethayarajh et al., 2024) learns from non-paired preference data, while ORPO (Hong et al., 2024b) introduces a reference-model-free odds-ratio term to directly contrast winning and losing responses with the policy model, training it alongside the SFT objective. R-DPO (Park et al., 2024) modifies DPO by incorporating an additional regularization term to prevent length exploitation. Finally, SimPO (Meng et al., 2024) normalizes rewards based on response length and enforces a target reward margin, ensuring that the reward difference between winning and losing responses meets a predefined threshold.

G Implementation Details

All training experiments are conducted on eight A100 GPUs using the LLaMA-Factory repository (Zheng et al., 2024b); our MPO is also implemented based on this repository. For distributed training, we leverage the DeepSpeed (Rasley et al., 2020) framework with ZeRO-2 optimization. Initially, we perform preliminary experiments to determine optimal batch sizes from [8, 16, 32] and training epochs from [1, 2, 3]. We observe that a batch size of 8 consistently yields the best performance across all methods, while the optimal number of training epochs varies by method. All models on all three backbones are trained with a maximum sequence length of 2048, and we employ a cosine learning rate schedule with a 10% warmup phase. To further refine performance, we extensively tune key hyperparameters for all baselines, including the learning
rate, training epochs, and method-specific parameters. The learning rate is searched within [3e-7, 4e-7, 5e-7, 6e-7, 1e-6], while training epochs are explored in [1, 2, 3]. Method-specific hyperparameter search spaces are detailed in Table 7. For MPO, β is searched in [1.0, 1.5, 2.0], and we find that 1.0 or 1.5 always exhibits the best results across all three backbones. Table 8 shows MPO's hyperparameters used under each backbone.

DPO (Rafailov et al., 2023): −log σ(β log [π_θ(y_w|x)/π_ref(y_w|x)] − β log [π_θ(y_l|x)/π_ref(y_l|x)]); β ∈ [0.01, 0.05, 0.1]
IPO (Azar et al., 2024): (log [π_θ(y_w|x)/π_ref(y_w|x)] − log [π_θ(y_l|x)/π_ref(y_l|x)] − 1/(2τ))²; τ ∈ [0.01, 0.1, 0.5, 1.0]
rDPO (Chowdhury et al., 2024): [(1−ϵ) L(θ, x, y_w, y_l) − ϵ L(θ, x, y_l, y_w)] / (1 − 2ϵ), where L(θ, x, y_w, y_l) = −log σ(β log [π_θ(y_w|x)/π_ref(y_w|x)] − β log [π_θ(y_l|x)/π_ref(y_l|x)]); ϵ ∈ [0.1, 0.5], β ∈ [0.01, 0.05, 0.1]
CPO (Xu et al., 2024a): −log σ(β log π_θ(y_w|x) − β log π_θ(y_l|x)) − λ log π_θ(y_w|x); λ = 1.0, β ∈ [0.01, 0.05, 0.1]
KTO (Ethayarajh et al., 2024): −λ_w σ(β log [π_θ(y_w|x)/π_ref(y_w|x)] − z_ref) + λ_l σ(z_ref − β log [π_θ(y_l|x)/π_ref(y_l|x)]), where z_ref = E_{(x,y)∼D}[β KL(π_θ(y|x) || π_ref(y|x))]; λ_l = λ_w = 1.0, β ∈ [0.01, 0.1, 1.0]
ORPO (Hong et al., 2024b): −log p_θ(y_w|x) − λ log σ(log [p_θ(y_w|x)/(1 − p_θ(y_w|x))] − log [p_θ(y_l|x)/(1 − p_θ(y_l|x))]), where p_θ(y|x) = exp((1/|y|) log π_θ(y|x)); λ ∈ [0.01, 0.1, 1.0]
R-DPO (Park et al., 2024): −log σ(β log [π_θ(y_w|x)/π_ref(y_w|x)] − β log [π_θ(y_l|x)/π_ref(y_l|x)] + (α|y_w| − α|y_l|)); α ∈ [0.05, 0.1, 0.5, 1.0], β ∈ [0.01, 0.05, 0.1]
SimPO (Meng et al., 2024): −log σ((β/|y_w|) log π_θ(y_w|x) − (β/|y_l|) log π_θ(y_l|x) − γ); β ∈ [2.0, 2.5], γ ∈ [1.0, 1.2, 1.4, 1.6]

Table 7: Detailed optimization objectives of current preference learning methods. We carefully tune their specific hyperparameters and list the search space in the right column.

Backbone | Learning Rate | Epoch | β
LLaMA-3.1-8B-Instruct | 6e-7 | 2 | 1.0
Gemma-2-9b-it | 4e-7 | 2 | 1.5
Qwen2.5-7B-Instruct | 6e-7 | 2 | 1.5

Table 8: The hyperparameters in our proposed MPO used for all three backbones.

H Additional Experimental Results

H.1 Results on Qwen2.5

Table 9 demonstrates the performance comparison of MPO and baselines based on Qwen2.5-7B-
Instruct. We have drawn the following key insights:

MPO still exhibits robust and consistent performance across various benchmarks and maintains multilingual utility. It consistently surpasses all preference learning methods, highlighting its outstanding safety alignment capabilities and scalability. Tables 10, 11 and 12 present how MPO maintains multilingual utility on MT-Bench, M-MMLU and MGSM, respectively.

Multilingual safety alignment depends on foundational abilities. The improvement of multilingual safety performance relies on the foundational multilingual capabilities of the backbone model. Results on Qwen2.5 show that while MPO still achieves significant gains compared to the original model and baselines, its absolute performance lags behind the other two backbones, especially for low-resource languages. This disparity arises from Qwen2.5's weaker foundational abilities in these languages. More specifically, as shown in Table 10, Qwen2.5 exhibits weak instruction-following ability in Bn and Sw, frequently generating outputs unrelated to the input. In our evaluation, such outputs are classified as unsafe.

H.2 Evaluation on Multilingual Utility

Evaluation Settings. We conduct a comprehensive evaluation of MPO's impact on multilingual utility across the following benchmarks.

•MT-Bench (Zheng et al., 2023): The dataset is designed for open-ended generation to evaluate a model's ability to follow multi-turn instructions. In our experimental setup, this benchmark covers English (En), Chinese (Zh), Arabic (Ar), Japanese (Jp), Korean (Ko), Swahili (Sw) and Bengali (Bn). We collect data in English [1], Japanese [2], Korean [3], and Arabic [4] from Hugging Face, and Chinese [5] from GitHub. In addition, we use GPT-4o to translate the English data into Swahili and Bengali, and perform manual proofreading to ensure correctness. The evaluation follows the LLM-as-a-judge approach,
where GPT-4o is prompted to assign a score directly to a single response on a scale of 1 to 10.

[1] https://huggingface.co/datasets/HuggingFaceH4/mt_bench_prompts
[2] https://huggingface.co/datasets/shi3z/MTbenchJapanese
[3] https://huggingface.co/datasets/StudentLLM/Korean_MT-Bench_questions
[4] https://huggingface.co/spaces/QCRI/mt-bench-ar/tree/main/data/mt_bench_ar
[5] https://github.com/HIT-SCIR/huozi

MultiJail (En Zh Ko Ar Bn Sw AVG.) | AdvBench-X (En Zh Jp Ko Ar Bn Sw AVG.) | CSRT
Qwen2.5 12.70 10.16 15.87 15.87 73.02 98.10 37.62 | 1.15 1.35 5.96 5.38 6.35 57.58 99.04 25.26 | 34.60
SFT 10.79 10.79 13.02 13.02 64.76 99.05 35.24 | 1.35 2.12 5.38 3.46 5.38 48.18 98.46 23.48 | 34.92
DPO 11.43 10.48 12.38 13.33 69.84 98.73 36.03 | 1.54 1.35 5.77 3.85 4.81 51.82 98.08 23.89 | 37.46
IPO 11.43 8.89 13.65 13.02 68.25 99.05 35.72 | 0.96 1.92 5.96 5.19 5.77 53.93 97.70 24.49 | 38.10
rDPO 9.84 8.25 15.24 14.92 70.16 98.73 36.19 | 1.73 1.92 4.23 4.04 6.92 52.78 96.93 24.08 | 35.56
CPO 13.33 7.94 12.38 13.33 58.92 98.73 34.11 | 1.15 2.31 5.96 4.23 6.35 50.48 99.04 24.22 | 34.92
KTO 10.16 9.52 13.65 13.97 66.67 99.05 35.50 | 2.50 1.54 5.38 4.04 6.73 53.93 98.08 24.60 | 39.81
ORPO 10.16 10.16 16.19 13.97 67.62 99.37 36.25 | 2.31 2.50 5.77 3.27 4.62 48.56 98.46 23.64 | 30.16
R-DPO 10.79 7.62 9.84 14.29 58.41 98.10 33.18 | 1.73 2.12 5.58 4.62 5.58 56.24 98.85 24.96 | 38.41
SimPO 11.75 9.52 14.60 13.97 70.79 98.41 36.51 | 1.35 1.92 5.38 4.23 5.96 49.33 98.85 23.86 | 31.43
MPO (Ours) 7.30 6.67 8.89 13.02 53.65 92.38 30.32 | 1.92 0.96 3.27 2.50 3.65 30.33 85.03 18.24 | 26.35
MPO - En Align 9.52 13.65 13.33 13.97 64.44 98.73 35.61 | 2.12 3.46 3.27 2.88 5.00 46.64 97.50 22.98 | 27.94

Table 9: Detailed results of Qwen2.5-7B-Instruct on three multilingual safety benchmarks are presented. The evaluation metric used is the Attack Success Rate (ASR), where lower values indicate better performance. The best results achieved by our method and baselines are highlighted in bold, while the second-best results are underlined.
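The ASR figures in these tables reduce to a simple ratio over judged responses; a minimal sketch, where the `classify` argument stands in for the GPT-4o translation-and-judgment steps described in Appendix E:

```python
def attack_success_rate(responses, classify):
    """ASR in percent: only a meaningful refusal ("safe") counts as a
    failed attack; "unsafe" and "irrelevant" both count as successes."""
    labels = [classify(r) for r in responses]
    successes = sum(label in ("unsafe", "irrelevant") for label in labels)
    return 100.0 * successes / len(labels)

# Toy judge standing in for the GPT-4o pipeline:
judgments = {
    "I cannot help with that.": "safe",
    "Sure, here is how...": "unsafe",
    "Bananas are yellow.": "irrelevant",
}
asr = attack_success_rate(list(judgments), judgments.get)
print(round(asr, 2))  # 66.67: two of three responses failed to refuse
```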
MT-Bench: En Zh Jp Ko Ar Bn Sw AVG.
LLaMA-3.1 7.31 5.38 4.88 5.22 5.43 3.98 3.98 5.17
SFT 7.31 5.56 4.84 4.94 5.09 4.25 3.72 5.10
DPO 7.44 5.66 5.03 5.49 4.89 4.76 4.16 5.35
IPO 7.31 5.42 4.89 5.12 5.23 4.39 4.08 5.26
rDPO 7.31 5.81 5.31 5.16 5.43 4.44 4.21 5.38
CPO 7.45 5.59 4.98 4.93 5.04 4.16 3.86 5.14
KTO 7.33 5.55 5.02 5.11 5.05 4.39 4.01 5.24
ORPO 7.39 5.41 4.73 5.01 5.36 4.24 3.72 5.12
R-DPO 7.30 5.63 5.21 5.45 5.48 4.80 4.11 5.43
SimPO 7.48 5.48 5.54 5.21 5.59 4.01 4.11 5.35
MPO 7.25 5.32 5.26 5.44 5.38 4.11 4.01 5.25
Gemma-2 7.71 7.07 6.84 6.81 7.06 5.66 6.15 6.76
SFT 7.72 6.86 6.31 6.38 7.08 5.16 5.84 6.48
DPO 7.79 6.88 7.06 6.86 6.98 6.26 6.20 6.86
IPO 7.61 6.86 6.95 6.86 7.07 5.88 6.17 6.77
rDPO 7.57 6.82 6.93 6.57 6.98 6.03 6.23 6.73
CPO 7.73 6.64 6.62 6.56 7.01 5.33 5.98 6.55
KTO 7.63 6.87 7.00 6.69 6.88 6.06 6.04 6.74
ORPO 7.71 6.86 6.78 6.49 7.09 5.22 6.07 6.60
R-DPO 7.77 7.11 7.09 7.33 6.87 5.33 6.11 6.80
SimPO 7.47 6.89 7.00 6.78 6.95 5.84 6.16 6.73
MPO 7.83 6.81 7.07 6.88 7.05 5.78 6.16 6.80
Qwen2.5 7.77 7.36 6.43 6.21 6.60 4.49 2.37 5.89
SFT 7.49 7.14 6.73 6.50 6.46 4.66 2.08 5.87
DPO 7.68 6.99 6.79 6.61 6.50 4.71 2.24 5.93
IPO 7.66 7.24 6.86 6.29 6.61 4.68 1.98 5.90
rDPO 7.78 7.01 6.58 6.38 6.59 4.48 2.10 5.85
CPO 7.61 7.16 6.82 6.33 6.57 4.54 2.12 5.88
KTO 7.77 7.03 6.82 6.60 6.64 4.71 2.18 5.96
ORPO 7.52 7.18 6.58 6.43 6.78 4.53 2.09 5.87
R-DPO 7.61 7.23 6.90 6.33 6.56 4.87 2.20 5.96
SimPO 7.57 7.07 6.73 6.29 6.68 4.77 2.08 5.88
MPO 7.77 7.39 6.71 6.19 6.56 4.43 2.28 5.90

Table 10: Results on MT-Bench across three backbones.

M-MMLU: En Zh Jp Ko Ar Bn Sw AVG.
LLaMA-3.1 67.70 51.30 47.90 43.30 47.60 41.40 40.60 48.54
SFT 67.30 50.90 47.70 47.00 42.60 40.60 39.20 47.90
DPO 67.10 51.50 48.40 47.30 41.80 39.80 38.20 47.73
IPO 67.30 51.60 48.30 47.90 43.00 41.30 39.60 48.43
rDPO 67.20 50.80 47.90 47.30 42.80 40.00 40.50 48.07
CPO 67.40 51.60 47.60 47.50 42.50 40.00 39.80 48.06
KTO 67.10 51.40 48.50 47.60 41.90 40.40 40.40 48.19
ORPO 67.30 51.00 48.10 47.30 42.10 41.30 38.90 48.00
R-DPO 66.90 51.50 48.10 47.70 43.30 40.70 40.20 48.34
SimPO 66.70 51.40 47.70 47.80 43.00 40.90 40.40 48.27
MPO 67.10 50.70 48.40 42.40 47.70 40.10 38.70 47.87
Gemma-2 73.40 61.20 59.40 53.80 59.10 49.90 52.40 58.45
SFT 61.10 50.70 73.30 59.40 59.40 55.40 52.50 58.83
DPO 61.20 49.80 73.40 58.70 59.20 53.90 52.20 58.34
IPO 61.30 49.70 73.30 59.30 59.50 53.60 52.40 58.44
rDPO 61.00 50.20 73.30 59.10 59.40 54.20 52.50 58.53
CPO 61.40 50.80 73.30 59.30 59.60 55.10 52.60 58.87
KTO 61.40 49.90 73.40 59.20 59.50 53.90 52.40 58.53
ORPO 61.70 50.40 73.20 59.30 59.70 55.20 52.90 58.91
R-DPO 60.80 49.00 73.30 59.10 59.20 54.00 51.40 58.11
SimPO 61.30 49.90 73.30 59.20 59.40 53.90 52.30 58.47
MPO 73.40 61.20 58.90 54.00 59.50 49.80 52.10 58.41
Qwen2.5 72.50 63.90 57.70 49.70 56.60 43.20 31.50 53.59
SFT 64.10 43.10 72.60 56.70 57.80 49.30 31.30 53.56
DPO 64.00 43.10 72.50 56.70 57.70 49.50 31.40 53.56
IPO 64.20 43.10 72.20 57.00 57.30 49.70 31.40 53.56
rDPO 64.00 43.40 72.40 56.60 57.70 49.70 31.40 53.60
CPO 64.40 42.60 72.70 57.00 57.90 49.10 31.60 53.61
KTO 64.00 43.20 72.50 56.70 57.70 49.80 31.40 53.61
ORPO 64.50 43.00 72.70 57.10 57.80 50.40 31.50 53.86
R-DPO 64.20 43.30 72.50 56.50 57.60 49.60 31.40 53.59
SimPO 64.10 43.10 72.20 56.80 57.70 50.00 31.40 53.61
MPO 72.20 64.50 57.30 50.50 57.10 43.20 31.60 53.77

Table 11: Results on M-MMLU across three backbones.

MGSM: En Zh Bn Sw AVG.
LLaMA-3.1 88.00 67.20 12.40 40.80 52.10
SFT 86.80 69.60 13.60 54.00 56.00
DPO 85.60 68.80 15.60 45.20 53.80
IPO 85.60 68.80 16.00 46.80 54.30
rDPO 86.80 71.20 13.60 39.60 52.80
CPO 88.80 73.60 11.20 43.20 54.20
KTO 84.80 67.60 16.80 48.40 54.40
ORPO 86.80 69.60 14.80 55.20 56.60
R-DPO 86.80 71.20 13.60 39.60 52.80
SimPO 86.00 72.40 11.60 46.40 54.10
MPO 88.00 68.40 12.00 53.60 55.50
Gemma-2 90.00 77.60 66.00 75.20 77.20
SFT 89.60 79.20 43.60 63.60 69.00
DPO 89.20 78.80 67.60 73.60 77.30
IPO 90.00 78.40 67.20 75.20 77.70
rDPO 90.40 78.00 65.60 76.80 77.70
CPO 90.40 79.20 46.00 68.00 70.90
KTO 90.00 77.60 67.60 75.20 77.60
ORPO 90.40 80.00 49.60 65.20 71.30
R-DPO 90.00 77.60 75.20 75.60 79.60
SimPO 90.00 76.80 67.20 75.20 77.30
MPO 90.80 80.40 70.00 74.00 78.80
Qwen2.5 87.20 82.00 35.20 6.40 52.70
SFT 87.60 82.00 35.60 8.40 53.40
DPO 87.60 82.00 38.00 7.20 53.70
IPO 88.00 84.40 36.40 7.60 54.10
rDPO 88.00 82.40 36.00 8.00 53.60
CPO 88.80 82.40 34.00 9.20 53.60
KTO 87.60 82.40 36.40 7.60 53.50
ORPO 88.40 82.40 24.00 6.80 50.40
R-DPO 87.20 82.40 36.40 7.20 53.30
SimPO 88.40 84.00 38.00 9.20 54.90
MPO 88.40 82.80 34.40 6.80 53.10

Table 12: Results on MGSM across three backbones.

•M-MMLU (Hendrycks et al., 2021) [6]: The MMLU is a widely recognized benchmark of general knowledge attained by AI models. It covers a broad range of topics from 57 different categories, ranging from elementary-level knowledge up to advanced professional subjects like law, physics, history, and computer science. OpenAI translated the MMLU's test set into 14 languages using professional human translators. Relying on human translators for this evaluation increases confidence in the accuracy of the translations, especially for low-resource languages. In our experimental setup, we adopt the 5-shot evaluation, and this benchmark covers English (En), Chinese (Zh), Japanese (Jp), Arabic (Ar), Korean (Ko), Swahili (Sw) and Bengali (Bn).

•MGSM (Shi et al., 2023) [7]: Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems.
The same 250 problems from GSM8K (Cobbe et al., 2021) are each translated by human annotators into 10 languages. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning. In our experimental setup, we perform evaluation via 0-shot CoT (Wei et al., 2022), and this benchmark covers English (En), Chinese (Zh), Swahili (Sw) and Bengali (Bn).

[6] https://huggingface.co/datasets/openai/MMMLU
[7] https://huggingface.co/datasets/juletxara/mgsm

Detailed Results. Here, we demonstrate the detailed results and the comparison with baseline methods. For MT-Bench, results on LLaMA-3.1-8B-Instruct, Gemma-2-9B-it and Qwen2.5-7B-Instruct are shown in Table 10. For M-MMLU, results are shown in Table 11. For MGSM, results are shown in Table 12. The results across three backbones show that MPO consistently maintains the general utility of both the dominant and target languages, demonstrating its effectiveness in achieving multilingual safety alignment without compromising the model's multilingual utility. Current preference learning methods typically incorporate a KL constraint during training to prevent the model from deviating too far from its original state, ensuring that multilingual general utility is well preserved. As a result, these methods maintain multilingual capabilities comparable to the original model, even after alignment. While likewise preserving multilingual
general utility, MPO achieves significantly superior multilingual safety performance compared to these methods. By leveraging reward gap minimization with the dominant language as a high-quality supervision signal, MPO effectively transfers safety alignment across languages without degrading the model's overall linguistic competence. This highlights its advantage in balancing multilingual safety and utility, making it a more effective approach for multilingual safety alignment in LLMs. The comparison between the KL constraint and the representation constraint used in MPO is further discussed in Appendix H.3.

H.3 Ablation Study

Fixed Constants as the Supervision Signal. Table 13 presents the detailed results of multilingual safety performance and general utility of the model when replacing the dominant language reward gap with a fixed value ranging from 0.1 to 20. Notably, 1.58 corresponds to the average reward gap of dominant-language samples in the training set. As the constant increases, multilingual safety performance steadily improves, even exceeding the performance of models aligned using the actual dominant language reward gap. However, this improvement comes at a significant cost to multilingual general utility, as excessive alignment strength induces substantial parameter shifts, leading to model collapse despite the application of a retention constraint. Additionally, setting the constant to 1.58 yields only limited improvements, suggesting that fine-grained supervision at the sample level is superior to coarse-grained dataset-level alignment.

Reward Gap of Other Languages as the Supervision Signal. Table 5 further demonstrates that using the reward gap of a target language as the alignment objective fails to yield meaningful safety improvements. When selecting the second-best safety-performing language (Arabic) or even low-resource languages (Swahili, Bengali), no effective multilingual safety enhancement is observed.
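The supervision signals compared in this ablation can be sketched numerically. Here the reward gap is written as a length-normalized log-probability margin, matching the 1/|y| factors in the gradient derivation of Appendix C; the numbers are purely illustrative:

```python
def reward_gap(logp_chosen, len_chosen, logp_rejected, len_rejected):
    # Length-normalized log-probability margin of one preference pair.
    return logp_chosen / len_chosen - logp_rejected / len_rejected

def mpo_sample_loss(beta, rg_target, rg_supervision):
    # Squared difference between the scaled target-language gap and the
    # supervision signal (per-sample dominant-language gap, or a constant).
    return (beta * rg_target - rg_supervision) ** 2

rg_dominant = reward_gap(-20.0, 10, -80.0, 16)  # per-sample signal: 3.0
rg_target = reward_gap(-45.0, 9, -50.0, 10)     # target-language gap: 0.0

per_sample = mpo_sample_loss(1.0, rg_target, rg_dominant)  # pulls toward 3.0
dataset_constant = mpo_sample_loss(1.0, rg_target, 1.58)   # dataset-level bar
```

A fixed constant applies the same "bar" to every sample, whereas the per-sample dominant-language gap adapts the target to each preference pair.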
This reinforces that the dominant language's reward gap provides a more reliable and high-quality alignment supervision signal.

Comparison with Cross-Lingual Transfer Methods. Cross-lingual transfer methods posit that skills acquired in one source language can be effectively transferred to other languages (Huang et al., 2023; Ranaldi et al., 2023; Qin et al., 2023; Etxaniz et al., 2024). This has been achieved through two main approaches: aligning multilingual representations within the activation space of LLMs, as in CLA (Li et al., 2024a) and LENS (Zhao et al., 2024a), or distilling knowledge from the dominant language, as in SDRRL (Zhang et al., 2024b). The details of these recent advancements are as follows:

•CLA: It aligns internal sentence representations across languages through multilingual contrastive learning and ensures output alignment by adhering to cross-lingual instructions in the target language.

•LENS: It enhances multilingual capabilities by leveraging LLMs' internal language representation spaces. LENS operates on two subspaces: the language-agnostic subspace, where it aligns target languages with the central language to inherit strong semantic representations, and the language-specific subspace, where it separates target and central languages to preserve linguistic specificity.

•SDRRL: It leverages self-distillation from resource-rich languages to effectively enhance multilingual performance through the use of self-distilled data.

Table 14 demonstrates that MPO consistently outperforms these methods, maintaining strong multilingual safety alignment. This further highlights the advantage of leveraging
the dominant language’s reward gap as a fine-grained supervision signal. Un- like MPO, which explicitly minimizes the reward gap difference between the dominant language and target languages, existing cross-lingual transfer ap- proaches struggle with noisy preference signals and suboptimal knowledge transfer. Additionally, they often exhibit performance degradation in low- resource languages, where data scarcity amplifies alignment instability. MultiJail MT-Bench En Zh Ko Ar Bn Sw A VG. En Zh Jp Ko Ar Bn Sw A VG. LLaMA-3.1 14.60 20.32 52.38 16.83 49.52 37.78 31.91 7.31 5.38 4.88 5.22 5.43 3.98 3.98 5.17 Constant 0.1 25.08 46.67 61.90 63.38 59.37 85.08 56.91 7.13 5.18 4.72 4.47 4.67 4.16 3.39 4.82 Constant 0.5 10.16 8.89 26.98 14.29 21.59 45.08 21.17 7.34 5.49 4.77 5.23 5.04 4.08 3.68 5.09 Constant 1.0 2.86 1.59 11.11 1.90 7.30 18.41 7.20 7.12 5.26 4.71 5.09 5.08 3.97 3.71 4.99 Constant 1.58 2.86 1.27 6.03 0.32 6.35 13.97 5.13 7.26 5.16 5.16 5.18 5.02 4.27 4.14 5.17 Constant 2.0 0.32 0.00 0.63 0.00 5.71 6.03 2.12 7.04 4.43 4.01 5.26 3.43 3.61 3.54 4.47 Constant 5.0 0.32 0.00 0.63 1.90 7.30 9.21 3.23 7.34 4.81 4.53 5.22 3.88 4.00 3.02 4.69 Constant 10.0 0.32 0.32 0.95 0.63 6.35 6.03 2.43 6.88 4.16 3.64 4.20 3.06 3.63 3.58 4.16 Constant 20.0 0.32 0.32 0.63 0.63 14.60 3.81 3.39 7.21 4.94 4.31 5.16 4.03 3.26 3.61 4.65 MPO (Ours) 2.22 0.95 4.76 1.90 12.38 10.79 5.98 7.25 5.32 5.26 5.44 5.38 4.11 4.01 5.25 Table 13: Results of the multilingual safety performance and general utility of the model when replacing the dominant language reward gap with a fixed value ranging from 0.1 to 20. The best results are highlighted in bold. MultiJail En Zh Ko Ar Bn Sw A VG. LLaMA-3.1 14.60 20.32 52.38 16.83 49.52 37.78 31.91 SDRRL 4.76 4.13 18.41 3.81 21.90 21.59 12.43 CLA 11.75 15.24 43.49 12.87 42.54 55.56 30.24 LENS 16.09 60.95 55.24 25.08 60.95 80.32 49.77 MPO 2.22 0.95 4.76 1.90 12.38 10.79 5.98 Table 14: Comparison with cross-lingual transfer method on MultiJail. 
The evaluation metric used is the Attack Success Rate (ASR), where lower values indicate better performance. The best results are highlighted in bold.

Effect of Constant Reward Gap from the Dominant Language. We consider an alternative design of MPO, namely computing the dominant language reward gap using the policy model instead of the reference model (MPO-Policy). We implement MPO-Policy on all three backbones, LLaMA-3.1, Gemma-2 and Qwen2.5, where the dominant reward gap is computed using the policy model during training. We performed an equivalent hyperparameter search over learning rates [3e-7, 4e-7, 5e-7, 6e-7], training epochs [1, 2, 3], and β values [1.0, 1.5, 2.0], and report the best-performing configuration. The results, shown in Table 15, indicate that MPO-Policy consistently underperforms our proposed method across all three backbones, both in terms of safety alignment (as measured by MultiJail, where a lower score is better) and general capabilities (as measured by MT-Bench, where a higher score is better). This empirically supports our choice to use
the reference-based reward gap.

We further provide theoretical justification. When computing RG_d using the current model π_θ instead of the reference model π_ref, the gradient of the loss function becomes:

∇_θ L_1 = 2β E_D [ (β·RG_t − RG_d) ( (1/|y_w^t|) ∇_θ log π_θ(y_w^t|x^t) − (1/|y_l^t|) ∇_θ log π_θ(y_l^t|x^t) − ( (1/|y_w^d|) ∇_θ log π_θ(y_w^d|x^d) − (1/|y_l^d|) ∇_θ log π_θ(y_l^d|x^d) ) ) ].  (28)

By comparing the gradients of the two formulations, Equation 27 and Equation 28, obtaining the dominant reward gap from the policy model has the following drawbacks:

•Optimization instability: the direction of the gradient can fluctuate significantly, as both sides of the loss are functions of θ.

•Lack of anchoring: without a stable reference, the loss can converge to trivial solutions where both RG_t and RG_d collapse toward zero, rather than aligning their structure.

In summary, our current design of computing the reward gap from the reference model not only improves training stability but also provides a clearer learning signal, enabling more reliable cross-lingual safety alignment. This design choice is well justified, as the dominant language in the original model typically exhibits the strongest safety alignment.

MultiJail (En Zh Ko Ar Bn Sw AVG.) | MT-Bench (En Zh Jp Ko Ar Bn Sw AVG.)
LLaMA-3.1 14.60 20.32 52.38 16.83 49.52 37.78 31.91 | 7.31 5.38 4.88 5.22 5.43 3.98 3.98 5.17
MPO 2.22 0.95 4.76 1.90 12.38 10.79 5.98 | 7.25 5.32 5.26 5.44 5.38 4.11 4.01 5.25
MPO-Policy 56.51 48.89 70.79 46.35 81.27 84.13 64.66 | 7.04 5.23 5.03 5.34 5.36 4.11 3.97 5.15
Gemma-2 2.54 9.52 14.61 4.13 20.32 14.60 10.95 | 7.71 7.07 6.84 6.81 7.06 5.66 6.15 6.76
MPO 0.63 4.76 6.98 3.81 16.51 7.94 6.77 | 7.83 6.81 7.07 6.88 7.05 5.78 6.16 6.80
MPO-Policy 1.59 5.08 7.30 5.40 17.78 10.83 7.99 | 7.81 6.77 6.73 6.59 6.84 5.71 6.01 6.64
Qwen-2.5 12.70 10.16 15.87 15.87 73.02 98.10 37.62 | 7.77 7.36 6.43 6.21 6.60 4.49 2.37 5.89
MPO 7.30 6.67 8.89 13.02 53.65 92.38 30.32 | 7.77 7.39 6.71 6.19 6.56 4.43 2.28 5.90
MPO-Policy 14.60 10.48 12.70 13.33 60.63 98.73 35.08 | 7.69 6.83 6.57 6.16 6.48 4.13 2.17 5.72

Table 15: Results of the multilingual safety performance and general utility of the model when replacing the computation of the dominant language reward gap with the policy model itself.

H.4 Impact of Data Source

Recent studies explore the use of LLMs themselves to generate multilingual preference data, rather than relying on external translation tools. Two notable approaches in this direction are MAPO (She et al., 2024) and LIDR (Yang et al., 2024b).

MultiJail: En Zh Ko Ar Bn Sw AVG.
MAPO - DPO 5.40 3.49 15.87 3.17 27.71 42.86 16.42
MAPO - MPO 2.46 1.59 3.17 3.17 9.52 8.25 4.69
LIDR - DPO + NLL 1.90 6.03 18.10 19.37 80.00 81.27 34.45
LIDR - MPO 2.22 2.86 1.90 6.67 8.89 11.12 5.61

Table 16: Results of the multilingual safety performance on MultiJail. The evaluation metric used is the Attack Success Rate (ASR), where lower values indicate better performance. The best results are highlighted in bold.

•MAPO constructs multilingual preference data by sampling multiple responses from an LLM in a given target
language, ranking them based on an alignment score computed via an external translation model, which measures their consistency with the response in the dominant language. The ranked responses form preference pairs that are then optimized using DPO (Rafailov et al., 2023).

•LIDR relies on the LLM's own translation capability to convert English preference data into target languages, followed by DPO optimization with an additional NLL loss.

While both methods explore multilingual preference data generation, they do not propose improvements to the multilingual preference optimization process itself, instead relying solely on DPO. To evaluate whether MPO remains effective when trained on preference data obtained using these methods, we conduct experiments using MAPO- and LIDR-generated data. As shown in Table 16, MPO consistently achieves the best multilingual safety alignment results across both data sources, demonstrating its robustness to variations in preference data. These results emphasize that while MAPO and LIDR explore multilingual preference data construction, they do not address the fundamental challenges of multilingual preference optimization. MPO, in contrast, not only adapts to different multilingual data sources but also improves the optimization process, ensuring stable and effective multilingual safety alignment.

H.5 Visualization Analysis

To further understand what MPO brings to the multilingual safety alignment of LLMs, as shown in Figure 6, we perform Principal Component Analysis (PCA) to visualize the multilingual representations in the activation spaces. Specifically, the multilingual harmful inputs are sourced from the AdvBench-X dataset. For each input, we append both a corresponding safe response and an unsafe response to visualize the model's representation. All representations are extracted from the final layer of the model's output, and the backbone model is LLaMA-3.1-8B-Instruct.
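The projection step can be sketched as follows; the random vectors here are stand-ins for the final-layer representations described above, and the cluster offsets are illustrative only:

```python
import numpy as np

def pca_2d(features):
    """Project row-vector features onto their top-2 principal components."""
    X = features - features.mean(axis=0)       # center the data
    # SVD of the centered matrix yields the principal directions in Vt.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T                        # scores on PC1 and PC2

# Toy stand-ins for representations of safe vs. unsafe inputs:
rng = np.random.default_rng(0)
safe = rng.normal(0.0, 1.0, size=(50, 16))
unsafe = rng.normal(3.0, 1.0, size=(50, 16))
points = pca_2d(np.vstack([safe, unsafe]))     # shape (100, 2)
```

Plotting the two columns of `points` with per-group colors reproduces the kind of scatter shown in Figure 6, where a clearer separation of the groups indicates a cleaner decision boundary.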
Notably, in all cases, the boundary between safe and unsafe inputs in English remains consistently clear. However, in the original model (a-c), the distinction between safe and unsafe inputs in the target languages (Zh, Ar, Sw) appears less structured and more entangled. After applying MPO alignment (d-f), the model demonstrates a significantly improved separation of safe and unsafe inputs in the target languages. This indicates that MPO enhances the multilingual safety alignment of the model, allowing it to develop clearer decision boundaries in languages beyond English.

H.6 Results on Aya-101

We include results on an explicitly multilingual model, Aya-101, providing further empirical insights. The results are summarized in Table 17.
Figure 6: The visualization of multilingual representations for safe and unsafe inputs across different languages. The upper row (a-c) illustrates the representation space of the original LLaMA-3.1-8B-Instruct model for English (En) and three additional languages: Chinese (Zh), Arabic (Ar), and Swahili (Sw). The lower row (d-f) presents the corresponding representation space after applying multilingual safety alignment via MPO. The visualizations highlight the structural changes in the representation space induced by MPO alignment.

X-AdvBench: En Zh Jp Ko Ar Bn Sw AVG.
Aya-101 17.88 20.00 42.69 45.38 20.32 58.08 53.93 36.90
+ MPO 8.65 14.04 20.19 25.19 9.81 31.35 39.23 21.21

Table 17: Results of the multilingual safety performance on X-AdvBench. The backbone is Aya-101. The best results are highlighted in bold.

Although Aya-101 supports a wide range of languages, its multilingual safety alignment is still limited. After applying our MPO method, with English as the dominant language given its well-established safety alignment, we observe a significant improvement in safety performance on X-AdvBench across all 7 languages.
arXiv:2505.16881v1 [cs.CL] 22 May 2025

CASTILLO: Characterizing Response Length Distributions of Large Language Models

Daniel F. Perez-Ramirez
RISE Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
daniel.perez@ri.se, dfpr@kth.se

Dejan Kostic
KTH Royal Institute of Technology, RISE Computer Science, Stockholm, Sweden
dmk@kth.se

Magnus Boman
Karolinska Institutet, Solna, Sweden
magnus.boman@ki.se

Abstract

Efficiently managing compute resources for Large Language Model (LLM) inference remains challenging due to the inherently stochastic and variable lengths of autoregressive text generation. Accurately estimating response lengths in advance enables proactive resource allocation, yet existing approaches either bias text generation towards certain lengths or rely on assumptions that ignore model- and prompt-specific variability. We introduce CASTILLO, a dataset characterizing response length distributions across 13 widely-used open-source LLMs evaluated on seven distinct instruction-following corpora. For each ⟨prompt, model⟩ sample pair, we generate 10 independent completions using fixed decoding hyper-parameters, record the token length of each response, and publish summary statistics (mean, std-dev, percentiles), along with the shortest and longest completions, and the exact generation settings. Our analysis reveals significant inter- and intra-model variability in response lengths (even under identical generation settings), as well as model-specific behaviors and occurrences of partial text degeneration in only subsets of responses. CASTILLO enables the development of predictive models for proactive scheduling and provides a systematic framework for analyzing model-specific generation behaviors. We publicly release the dataset and code to foster research at the intersection of generative language modeling and systems.
Preprint.

1 Introduction

Large Language Models (LLMs) have transformed natural language processing by enabling general-purpose language understanding and generation at unprecedented scale [Brown et al., 2020, Raffel et al., 2020, Chowdhery et al., 2023]. Instruction-tuned LLMs are now widely deployed across a range of applications—including question answering, dialogue systems, and code generation—powering production services used by millions of users daily [Ouyang et al., 2022, Chen et al., 2021, Achiam et al., 2023, Team et al., 2023]. However, their computational and memory demands pose significant challenges to scalable and cost-effective inference in production systems.

Figure 1: LLM response lengths exhibit significant inter- and intra-model variability. Points depict the mean response length for a ⟨prompt, model⟩ pair; shaded regions depict ±1 standard deviation over 10 independent responses under fixed decoding parameters. Variation is significant both among different models and within the 10 sampled responses of a single model. (Panels: randomly selected prompts p1–p8 from the Dolly and Apps datasets; models: gemma-4B, llama-1B, llama-3B, llama-70B, llama-8B, phi4.)

A central bottleneck in LLM serving systems is the difficulty of managing compute and memory resources efficiently, especially under high concurrency and latency constraints. Recent systems research has emphasized memory management of the attention Key-Value (KV) cache [Kwon et al., 2023] and reactive scheduling strategies that respond to runtime demand fluctuations [Duan et al., 2024, Patel et al., 2024, Agrawal et al., 2024]. While effective to some degree, reactive methods are inherently limited by their inability to anticipate the variable and stochastic nature of autoregressive generation in
transformer-based models [Vaswani et al., 2017, Holtzman et al., 2020]. This unpredictability leads to inefficient resource utilization and increased operational overhead. Proactive scheduling strategies, which aim to predict response lengths before generation, have the potential to significantly improve LLM inference efficiency. Existing approaches often rely on strong assumptions, such as fixed-length outputs across prompts or models [Jin et al., 2023, Shahout et al., 2025], or introduce unwanted bias by implicitly conditioning the generation toward target lengths [Zheng et al., 2023]. Figure 1 highlights the substantial variability in response lengths across both prompts and models. Even under fixed decoding parameters, we observe that no model consistently produces longer or shorter outputs—one model may respond concisely to a particular prompt but verbosely to another. Even when fixing the model and prompt, we see cases of high variance among responses that can extend for hundreds of tokens for some models (see Apps-p3 in Figure 1). This unpredictability underscores the limitations of using the text prompt alone to estimate response lengths. In this work, we present CASTILLO (CharActerizing reSponse lengTh dIstributions in Large Language mOdels), a large-scale dataset designed to empirically characterize LLM response length distributions across diverse models and tasks. CASTILLO spans 13 widely-used open-source LLMs and 7 instruction-following datasets, covering a diverse range of model architectures, sizes, and text corpora. For each ⟨prompt, model⟩ sample pair, we generate 10 independent completions using fixed decoding hyper-parameters and record the token length of each response. We release summary statistics (mean, standard deviation, percentiles), along with the shortest and longest completions and the exact generation settings.
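The quantities released per sample can be sketched as a small record type. The field names below are illustrative stand-ins chosen for this sketch, not the released schema (the paper specifies the actual schema in its appendix):

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CastilloRecord:
    """Hypothetical per-⟨prompt, model⟩ record mirroring the statistics
    described in the text; field names are illustrative guesses."""
    prompt_id: str
    model: str
    dataset: str
    response_lengths: List[int]                 # token counts of the 10 completions
    mean_length: float = 0.0
    std_length: float = 0.0
    percentiles: Dict[str, float] = field(default_factory=dict)  # P25/P50/P75/P99
    shortest_completion: str = ""
    longest_completion: str = ""
    gen_config: Dict[str, float] = field(default_factory=dict)   # temperature, top_k, top_p
```
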
Our analysis reveals several key insights:

1. Inter-model variability: Different models exhibit markedly different response length distributions, even when prompted with identical inputs (see Figures 3 and 4). This underscores the need for model-specific characterization of generation behavior.
2. Intra-model variability: Within a single model, we observe high variance in response lengths across different prompts (see Figure 5), even with fixed decoding settings, highlighting the influence of prompt semantics on generation length.
3. Partial text degeneration: In some cases, a subset of the 10 completions for a given prompt-model pair exhibits text degeneration artifacts (e.g., repetition or incoherence), suggesting localized instability not captured by single-sample evaluations (see Figure 6).

CASTILLO serves as both a benchmarking resource and a foundation for building predictive models of response length. Such models could inform proactive scheduling strategies, reduce latency, and optimize resource allocation in LLM serving infrastructures. Additionally, CASTILLO provides a controlled framework for systematically analyzing and comparing generation behaviors across different models, prompts, and decoding settings. We release the dataset and codebase (see Appendix A) to support reproducibility and foster future work at the intersection of machine learning and systems.
Figure 2: Transformer-based LLM inference in production is divided into a fast, compute-intensive phase (prefill) and a time-consuming, memory-intensive phase (decode).

2 Background and Related Work

To our knowledge, CASTILLO is the first publicly available dataset to empirically characterize response length distributions across multiple LLMs and instruction-following datasets.

Transformer-based models. Modern LLMs are predominantly based on the Transformer architecture [Vaswani et al., 2017], which models text as sequences of discrete tokens and generates outputs through an autoregressive process–predicting each token conditioned on the preceding context. This token-by-token generation is inherently stochastic and is influenced by the model parameters, the sampling hyperparameters (such as temperature), and the decoding configuration [Holtzman et al., 2020]. In this work, we focus exclusively on transformer-based LLMs, given their ubiquity and dominance in both academic and industrial deployments.

Efficient LLM inference. In production systems, LLM inference can be decomposed into two distinct phases: a prefill phase and a decoding phase [Agrawal et al., 2023, Patel et al., 2024, Agrawal et al., 2024]. Figure 2 depicts the process: the prefill phase encodes the input prompt and computes the KV-Cache for the attention layers–a compute-intensive, parallelizable operation that typically executes in the millisecond range. The decoding phase then generates tokens autoregressively, both using and augmenting the KV-Cache, incurring sequential dependencies and high memory overhead. This phase often dominates latency, stretching into seconds or even minutes for long responses. Meanwhile, scheduling decisions–such as request admission or GPU placement–typically occur in the microsecond range, creating a critical need for accurate, low-latency predictions of output length to enable proactive scheduling.
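To make the scheduling argument concrete, a toy cost model can relate predicted output length to end-to-end latency: the prefill term grows with prompt length but is cheap per token, while the decode term grows with output length and dominates. The per-token constants below are invented placeholders for illustration, not measurements from any real system:

```python
def estimate_request_latency(prompt_tokens: int,
                             predicted_output_tokens: int,
                             prefill_ms_per_token: float = 0.05,
                             decode_ms_per_token: float = 20.0) -> float:
    """Toy latency model: prefill scales with prompt length (fast, parallel),
    decode scales with output length (slow, sequential). Constants are
    illustrative placeholders only."""
    prefill_ms = prompt_tokens * prefill_ms_per_token
    decode_ms = predicted_output_tokens * decode_ms_per_token
    return prefill_ms + decode_ms
```

Under these placeholder constants, a 1000-token prompt contributes far less latency than a 500-token response, which is why an accurate output-length estimate is the quantity a proactive scheduler actually needs.
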
Recent work aims at improving LLM inference efficiency through systems-level innovations. Methods such as PagedAttention [Kwon et al., 2023] and FlashAttention [Dao et al., 2022] address memory locality and computation speed during attention computation. Inference engines like vLLM [Kwon et al., 2023], SplitWise [Patel et al., 2024], Sarathi [Agrawal et al., 2024], FlexGen [Sheng et al., 2023], and MuxServe [Duan et al., 2024] implement dynamic batching, cache eviction policies, and token-level scheduling. CASTILLO is orthogonal to these efforts: our work provides a standardized dataset to inform and augment such systems via better response length estimation.

Response Length Prediction. Recent works attempt to directly predict response lengths in advance. Zheng et al. [2023] propose to fine-tune LLMs with an auxiliary task that predicts the number of output words before generating a response. However, this method introduces length biases and was evaluated on only a single dataset, limiting generality. Jin et al. [2023] introduce S3, a classification-based predictor using DistilBERT to assign each input prompt to one of 10 output-length buckets. While efficient, S3 ignores intra-model variability (each ⟨prompt, model⟩ pair has a single estimated length) and assumes inter-model homogeneity by using the same predictor for all LLMs. Moreover, the use of DistilBERT constrains prompt length to 512 tokens, and the fixed bucket resolution may be too coarse for fine-grained scheduling. Building on S3, Shahout et al. [2025] propose TRAIL, which refines predictions during decoding by conditioning on specific intermediate activations from the LLM and updating estimates via
a Bayesian model. While effective, TRAIL introduces communication overhead and requires in-depth, layer-wise, model-specific analysis, which limits its generalization. CASTILLO contributes to this growing area by offering a large-scale dataset that captures both inter- and intra-model response variability under fixed decoding conditions. By releasing response statistics (mean, percentiles, extremes), generation hyperparameters, and evidence of partial degeneration, our dataset enables systematic study and benchmarking of predictive length estimation methods, and serves as a robust testbed for building more general and proactive inference schedulers.

3 CASTILLO Dataset Construction

This section details the construction of CASTILLO, including source dataset selection, the open-source LLMs used, and the response generation framework. Our dataset is designed to enable empirical study of response length variability and support the development of predictor models for proactive inference scheduling.

3.1 Source Dataset Selection

To capture a diverse range of instruction-following behaviors, we consider seven publicly available and widely used open-source datasets spanning both general-purpose NLP tasks and code generation benchmarks. Our selection includes open-ended instruction datasets—Dolly, ShareGPT, and Alpaca—as well as structured problem-solving and code-oriented datasets such as Mbpp, Apps, DS-1000, and BigCodeBench. All selected datasets have seen prior adoption in peer-reviewed machine learning or systems research and are representative of real-world LLM workloads. Each dataset is processed independently. When a dataset contains more than 2000 samples, we randomly downsample to a uniform cap of 2000 prompts. For datasets with internal categorization (e.g., by task type or domain), we apply stratified sampling to maintain representative distributions.
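The capped, stratified downsampling described above can be sketched as follows. This is a minimal illustration of the stated procedure, not the authors' code; the `category` key is a hypothetical stand-in for whatever internal categorization a source dataset provides:

```python
import random
from collections import defaultdict


def stratified_downsample(samples, cap=2000, key=lambda s: s.get("category"), seed=0):
    """Downsample to at most `cap` items while preserving per-category
    proportions. Sketch of the described procedure under stated assumptions."""
    if len(samples) <= cap:
        return list(samples)
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for s in samples:
        by_cat[key(s)].append(s)
    picked = []
    for _cat, items in by_cat.items():
        # Allocate slots proportionally to the category's share of the corpus.
        k = max(1, round(cap * len(items) / len(samples)))
        picked.extend(rng.sample(items, min(k, len(items))))
    rng.shuffle(picked)
    return picked[:cap]
```
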
Importantly, we verify that the downsampled subsets preserve the original distributions of input token lengths, as shown in Appendix B. The resulting prompts are partitioned into training, validation, and test splits using a 70/20/10 ratio, with optional stratification where applicable. The training and validation sets are intended for the development of response length predictors, while the test set is reserved for system-level evaluation. Table 1 presents an overview of the seven datasets considered, including token-level statistics of the input prompts after tokenization using the llama-3.2-1B tokenizer. We report the mean, minimum, percentiles (P25, P50, P75, P99), and maximum token lengths. These statistics highlight the high diversity in prompt lengths across datasets–from compact, instruction-like inputs in Alpaca and Dolly, to longer, more verbose examples in Apps and DS-1000—underscoring the importance of considering a wide range of source datasets when building models that characterize response lengths.

3.2 LLM Model Selection

To explore variability across models and architectures, CASTILLO includes 13 open-source instruction-tuned LLMs from five major organizations. We prioritize models that are (1) publicly available on Hugging Face, (2) used in recent systems and ML research focused on inference optimization, and (3) realistically deployable by research labs and Small- and Medium-sized Enterprises (SMEs), particularly in the mid-size parameter range. Our selection includes models from the LLaMA, Mistral, Qwen, Phi, and Gemma families, ranging in size from 1B to 70B parameters. By including multiple sizes within families, we allow for both intra-family and inter-family comparisons of response behavior. We query each model using its instruction-tuned variant and standardized Hugging Face model configuration. Appendix
C (Table 2) lists the specific model identifiers, their parameter sizes, organizations, and the number of H100 GPUs used for inference.

3.3 Response Generation Process

We implement dataset-specific prompt formatting routines to respect the schema and intended use cases of each source dataset. These routines generate consistent and model-compatible chat prompts, as detailed in the subsections of Appendix B. To ensure compatibility with all models and control GPU memory usage, we cap input prompt lengths at 2500 tokens. This threshold was selected based on analysis of the input token distributions in Table 1, and strikes a balance between empirical coverage and the feasibility of computing response lengths with our available resources. Notably, this limit is significantly larger than the 99th-percentile token length across all datasets and avoids out-of-memory failures while preserving most of the input space relevant for inference.

Table 1: Source datasets used by CASTILLO to generate input prompts for the LLMs, with sample counts and token length statistics after tokenization with Llama-3.2-1B. "PX" denotes the Xth percentile. More details about source dataset analysis and processing in Appendix B.

Name                                 Samples  Mean   Min.  P25    P50  P75  P99     Max.
DollyDataset [Conover et al., 2023]  2000     125.9  36    44     50   146  795.2   4003
ShareGPT [Chiang et al., 2023]       2000     260.5  36    48     64   168  2534.0  2534
Alpaca [Taori et al., 2023]          2000     53.7   39    45     49   57   114.0   397
Mbpp [Austin et al., 2021]           974      153.5  88    109    131  173  336.3   2265
Apps [Hendrycks et al., 2021]        2000     545.0  87    307.7  441  650  2105.0  2534
DS-1000 [Lai et al., 2023]           1000     317.2  67    170.5  283  395  1018.3  2109
BigCodeBench [Zhuo et al., 2025]     1140     179.8  87    137    164  205  398.4   1251
For each ⟨prompt, model⟩ pair, we first transform the text prompt into a chat template using the respective model's tokenizer, construct a batch of 10 identical prompt entries, and perform independent decoding for each using Hugging Face's Transformers interface [Wolf et al., 2020]. Generation is performed with do_sample=True, and the decoding parameters (temperature, top-k, top-p) are recorded and stored for each sample. Our codebase (see Appendix A) is designed to be extensible, allowing users to augment the dataset with additional models and decoding configurations by specifying them at runtime. To guard against resource exhaustion due to pathological completions (e.g., repeated text generation in a loop), we cap the maximum output length at 15,000 tokens per response (approximately 11,000 words). This decision was informed by early observations of text degeneration, where some LLMs entered repetitive loops or verbose rambling that consumed the full context window and available GPU memory. After generation, each batch of 10 completions is post-processed to extract the token length of each response. For every ⟨prompt, model⟩ pair, we compute and store the following statistics: i) the mean, standard deviation (std-dev), and percentile values (P25, P50, P75, P99) of response lengths, ii) the shortest and longest completions (cached as raw text), and iii) the full decoding configuration (temperature, top-k, top-p). More details about the complete dataset schema can be found in Table 3 in the
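The per-batch post-processing step can be sketched as follows. The nearest-rank percentile method is our assumption for this sketch, since the text does not specify how percentiles are computed:

```python
import statistics


def summarize_lengths(token_lengths):
    """Summary statistics for one batch of completion lengths, mirroring the
    quantities stored per ⟨prompt, model⟩ pair. Percentiles use a simple
    nearest-rank scheme (an assumption; the paper does not specify one)."""
    xs = sorted(token_lengths)
    n = len(xs)

    def pct(p):
        # Nearest-rank index into the sorted lengths.
        idx = min(n - 1, max(0, round(p / 100 * (n - 1))))
        return xs[idx]

    return {
        "mean": statistics.fmean(xs),
        "std": statistics.stdev(xs) if n > 1 else 0.0,
        "P25": pct(25), "P50": pct(50), "P75": pct(75), "P99": pct(99),
        "min": xs[0], "max": xs[-1],
    }
```
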
Appendix.

3.4 Sanitizing the CASTILLO Dataset

To support downstream analysis and robustness studies, we further partition the dataset into two complementary subsets:

1. A sanitized version, which sanitizes or excludes all completions exhibiting degeneration artifacts (e.g., high repetition, loss of coherence, maximum-length responses), and
2. A degeneration-only subset, containing flagged cases for which at least one of the 10 completions showed evidence of text degeneration.

For full details and illustrative examples, see Appendix D. To identify text degeneration in CASTILLO, we applied a two-stage filtering strategy based on output length statistics. Specifically, we flagged samples as degenerated if they contained any outputs that reached the generation cap of 15,000 tokens, or if they exhibited high variance in response lengths—defined as a standard deviation greater than twice the mean combined with at least one output exceeding 8,500 tokens. This approach enabled us to distinguish true degeneration from benign verbosity and to construct both the degenerated and sanitized subsets of the dataset used in our analysis. These two versions allow researchers to isolate text-degeneration behavior or train predictors on clean, high-quality generations.

4 Characterizing LLM Response Distributions

In this section, we analyze the response length distributions in CASTILLO to characterize the generation behavior of different LLMs across the considered datasets.
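The two-stage filter described above translates directly into code. This is a sketch following the stated thresholds (the 15,000-token cap; std-dev greater than twice the mean combined with at least one output above 8,500 tokens), not the released pipeline:

```python
import statistics


def is_degenerate(lengths, cap=15000, long_tail=8500):
    """Flag a batch of completion lengths as degenerated per the two-stage
    rule described in the text."""
    # Stage 1: any completion hit the generation cap.
    if any(l >= cap for l in lengths):
        return True
    # Stage 2: extreme variance plus at least one very long outlier.
    mean = statistics.fmean(lengths)
    std = statistics.stdev(lengths) if len(lengths) > 1 else 0.0
    return std > 2 * mean and any(l > long_tail for l in lengths)
```
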
Figure 3: Both the average mean and std-dev across models and datasets can vary by several hundred tokens. Heatmaps depict the average of both the mean (3a) and std-dev (3b) of response lengths across models and datasets for all generations performed.

Our findings confirm expected trends—such as substantial variability across models and datasets—but also uncover significant variation in response lengths for the same prompt under fixed decoding settings. This highlights the challenges in accurately
predicting response lengths. Moreover, our analysis reveals notable occurrences of text degeneration in a subset of generated completions.

4.1 Inter- and Intra-Model Variability

Figure 3 presents a heatmap of the average mean (3a) and standard deviation (3b) of response lengths across all ⟨model, dataset⟩ combinations. We observe pronounced disparities in both metrics. Models consistently produce longer responses for certain datasets, notably ShareGPT and Apps [Hendrycks et al., 2021], with mean response lengths varying by several hundred tokens across models. The standard deviation heatmap (Figure 3b) further illustrates intra-model variability, with response length variation ranging from a few dozen tokens (e.g., gemma-4B on DS-1000) to a couple hundred tokens (e.g., mixt-8x7B). These findings emphasize the inherent stochasticity of autoregressive generation, even under fixed sampling parameters, and underscore the difficulty of response length prediction.

Distributions Across Models and Datasets. To further explore the range of variability, Figure 4 presents boxplots of the mean (4a) and standard deviation (4b) of response lengths across all ⟨prompt, model⟩ pairs, grouped by dataset. The wide interquartile ranges and extended whiskers illustrate the breadth of generation behavior across LLMs. We observe a consistent pattern in which code-related datasets (Mbpp, Apps, DS-1000, BigCodeBench) elicit longer responses compared to instruction-tuned text datasets (e.g., Alpaca, Dolly). Interestingly, the Gemma models deviate from this trend, producing longer responses on text-based datasets. This inversion highlights the importance of model-specific tuning and architectural differences in influencing output characteristics.
Within-Batch Variability and Coefficient of Variation. To quantify the variation within each batch of 10 generated responses per ⟨prompt, model⟩ pair, we compute the average coefficient of variation (CV = std-dev/mean) per model-dataset combination. Figure 5 shows CV values ranging from 7% to 45%, indicating substantial within-batch response length variation despite fixed decoding parameters for the batch. High intra-batch CV values suggest that LLM outputs can be highly unpredictable for certain models and datasets, particularly for code-oriented datasets like Apps, BigCodeBench, and DS-1000, where prompt structure and complexity likely amplify generation variance. This level of unpredictability presents a challenge for inference-time schedulers and systems that rely on deterministic or narrowly distributed response length estimates.

Figure 4: LLMs exhibit substantial variation in the distributions of both the mean and std-dev of their responses, with large disparities across both models and datasets, even under identical generation settings. (a) Mean response length distribution across models and datasets depicts significant inter-model variation in response lengths; box extents delimit the inter-quartile range, whiskers the 1st and 90th percentiles. (b) Distribution of the standard deviation of model response lengths across datasets; box extents delimit the inter-quartile range, whiskers the 1st and 99th percentiles.
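The per-batch coefficient of variation used above is simply the std-dev normalized by the mean:

```python
import statistics


def batch_cv(lengths):
    """Coefficient of variation (std-dev / mean) for one batch of completion
    lengths; averaging these per ⟨model, dataset⟩ combination yields the
    values reported in the text."""
    return statistics.stdev(lengths) / statistics.fmean(lengths)
```

A CV of 0.45 means the std-dev is nearly half the average response length, which is why point estimates of output length are unreliable for scheduling.
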
Figure 5: High coefficient of variation between responses for the same prompt from an LLM. The figure depicts average coefficients of variation (std-dev normalized by the mean) across datasets and
models ranging from 7% up to 45%. Hence, the std-dev can reach up to half of the average response length in the batch.

Figure 6: We identify a total of 956 text degeneration cases across all models and datasets. Figure 6a depicts a heatmap of the number of text degeneration cases by model and dataset; Figures 6b and 6c show each model's and each dataset's share of the degeneration set, respectively.

4.2 Text Degeneration Cases

We define text degeneration as cases where generated completions exhibit artifacts such as excessive repetition, incoherence, or abrupt verbosity, often reaching the maximum token limit. Figures 15 and 16 in Appendix D illustrate qualitative examples of such behavior. CASTILLO enables the identification of degeneration cases by generating 10 independent completions per prompt under fixed decoding settings. This design facilitates benchmarking the likelihood of encountering degeneration artifacts across models and datasets. Our degeneration detection pipeline (described in Algorithm 1 in Appendix D) identified a total of 956 affected samples. Figure 6 provides a heatmap of degeneration counts across ⟨model, dataset⟩ combinations.
Of the 13 LLMs evaluated, only two—minist-8B and gemma-12B—exhibited no degeneration cases. In contrast, three models (llama-1B, llama-3B, and llama-8B) showed degeneration across all datasets. A clear trend emerges within model families and their sizes: smaller variants (e.g., qwen-7B, llama-1B) are more prone to degeneration than their larger counterparts. Figures 6b and 6c summarize the distribution of degeneration cases across models and datasets. Notably, llama-1B alone accounts for over 40% of all degeneration cases, which may reflect the model's limited capacity to regulate generation under open-ended prompting. On the dataset side, ShareGPT and Apps contribute the majority of degeneration cases, likely due to their high diversity and prompt verbosity. These findings support the inclusion of both sanitized and degeneration-only subsets in CASTILLO, enabling researchers to develop robust length predictors and to better understand model-specific vulnerabilities in generation stability.

5 Use Cases for CASTILLO

CASTILLO is designed to serve as a flexible and extensible benchmark for analyzing and predicting the output behavior of instruction-tuned LLMs. We outline several key use cases that highlight the dataset's applicability in both systems and machine learning research:

Length Prediction for Proactive Scheduling. In latency-sensitive LLM serving systems, the ability to predict output length prior to generation is critical for
informed admission control, resource allocation, and scheduling. CASTILLO provides per-sample statistics (mean, std-dev, percentiles) over multiple completions per prompt, which enables the training and evaluation of both regression-based and classification-based output length predictors. These models can be integrated into serving infrastructures to support proactive inference scheduling strategies.

Benchmarking Model-Specific Generation Behavior. CASTILLO enables researchers to compare how different open-source LLMs respond to the same prompt distribution under identical decoding parameters. By including models across diverse architecture families and parameter scales (1B–70B), our dataset supports detailed inter- and intra-model comparisons. This facilitates analyses of length variability, degeneration tendencies, and generation style, which can inform the selection and deployment of models for specific application domains.

Degeneration Detection and Filtering. The dataset includes both sanitized and degeneration-only subsets, enabling research into degeneration detection, mitigation strategies, and generation stability analysis. Degeneration patterns—such as token repetition, incoherence, or excessive verbosity—are often subtle and model-specific. Our dataset provides the necessary signals (e.g., outlier response lengths, high variance) to develop heuristics or learning-based detectors for pathological generations.

Downstream Integration for System Simulations. CASTILLO can serve as a simulation backend for system-level studies in LLM inference. For example, system designers can use the statistical length distributions to emulate response lengths in multi-tenant inference clusters, or to test token-level schedulers and memory managers under realistic generation profiles. By decoupling evaluation from online generation, researchers can prototype and benchmark new scheduling policies efficiently.
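As a minimal illustration of the simulation use case, the stored per-sample statistics can seed a synthetic workload generator. The normal-distribution assumption below is ours and purely illustrative; an empirical or lognormal fit to the released percentiles may match real length distributions better:

```python
import random


def sample_response_lengths(mean, std, n=100, seed=0, floor=1):
    """Emulate response lengths for a workload simulator by drawing from a
    normal distribution fitted to one record's mean/std-dev. Illustrative
    backend only; the distributional form is an assumption."""
    rng = random.Random(seed)
    return [max(floor, round(rng.gauss(mean, std))) for _ in range(n)]
```

A scheduler prototype can then replay these synthetic lengths instead of running live generation, which is the decoupling the text describes.
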
6 Limitations and Future Work

CASTILLO offers a comprehensive and empirically grounded foundation for analyzing LLM response length distributions, addressing a crucial gap in current benchmarking and systems design. However, we identify the following limitations and opportunities for future work.

Transformer-centric Scope. All models used for building our dataset are based on the transformer architecture, reflecting its current dominance in LLM research and deployment. While our findings are broadly applicable, they may not generalize to alternative architectures such as state-space models [Gu et al., 2022, Gu and Dao, 2023] or LSTM-based systems [Beck et al., 2024]. Future iterations of CASTILLO could incorporate such models to assess whether similar length variability patterns emerge beyond the transformer architecture.

Extensibility to More Models and Datasets. While CASTILLO includes a diverse range of 13 instruction-tuned LLMs and 7 datasets, the long tail of model and data heterogeneity in practice suggests significant room for growth. To support community-driven expansion, we publicly release the dataset and a modular codebase (see Appendix A) that enables seamless addition of new models, datasets, and generation settings. Notably, CASTILLO also supports optional caching of prefill-stage activations (hidden states and logits), opening new avenues for analysis of how input representations correlate with output length. We envision future work systematically exploring this relationship.

Chain-of-Thought Specific Extensions. Our dataset generation framework can be extended to accommodate reasoning-focused models that follow the chain-of-thought framework [Wei et al., 2022, Yao et al., 2023]. In such settings, we propose partitioning the response into two measurable segments: (1) the reasoning trace delimited by <think> tokens and (2) the final answer. This would enable fine-grained prediction of intermediate vs. final response lengths, and
support a deeper understanding of how models allocate tokens to reasoning versus conclusions. Given the demonstrated model-specific variation in generative behavior, we hypothesize that reasoning lengths may also reflect distinct architectural or model "personalities".

Generation-Configuration Interactions. While CASTILLO generates responses using each model's default decoding settings, the generation configuration remains a powerful lever that modulates output variability. Our dataset includes metadata for temperature, top-k, and top-p settings, and can be extended to systematically study their interactions with both prompt structure and model identity. Future work could incorporate model-specific controlled ablation studies to quantify the marginal effect of each decoding parameter on response length distribution and degeneration likelihood. This process is model-dependent and lies outside our present scope of characterizing inter-model variability.

7 Conclusion

We introduced CASTILLO, a large-scale dataset designed to empirically characterize response length distributions across open-source instruction-tuned LLMs. By systematically measuring response length variability across 13 models and 7 diverse datasets, it provides a rich foundation for building predictive models that support proactive inference scheduling. Beyond its utility for systems research, our dataset enables the investigation of model-specific generation behaviors, including intra-model variance and partial text degeneration phenomena. We release the dataset, codebase, and baseline tools to encourage reproducibility and community-driven extensions, and we envision CASTILLO as a versatile tool for research at the intersection of machine learning and systems.

References

J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

A. Agrawal, A. Panwar, J. Mohan, N. Kwatra, B. S.
Gulavani, and R. Ramjee. Sarathi: Efficient llm inference by piggybacking decodes with chunked prefills, 2023. URL https://arxiv.org/ abs/2308.16369 . 9 A. Agrawal, N. Kedia, A. Panwar, J. Mohan, N. Kwatra, B. Gulavani, A. Tumanov, and R. Ramjee. Taming Throughput-Latency tradeoff in LLM inference with Sarathi-Serve. In 18th USENIX Symposium on Operating Systems Design and Implementation (OSDI 24) , pages 117–134, Santa Clara, CA, July 2024. USENIX Association. ISBN 978-1-939133-40-3. URL https://www. usenix.org/conference/osdi24/presentation/agrawal . J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732 , 2021. M. Beck, K. Pöppel, M. Spanring, A. Auer, O. Prudnikova, M. K. Kopp, G. Klambauer, J. Brand- stetter, and S. Hochreiter. xLSTM: Extended long short-term memory. In The Thirty-eighth Annual Conference on Neural Information Processing Systems (NeurIPS) , 2024. URL https: //openreview.net/forum?id=ARAxPPIAhq . T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-V oss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems (NeurIPS) , volume 33, pages 1877–1901. Curran As- sociates, Inc., 2020. URL
https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.

M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba. Evaluating large language models trained on code, 2021.

W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing. Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.

A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1-113, 2023. URL http://jmlr.org/papers/v24/22-1144.html.

M. Conover, M. Hayes, A. Mathur, J. Xie, J. Wan, S. Shah, A. Ghodsi, P. Wendell, M. Zaharia, and R. Xin. Free Dolly: Introducing the world's first truly open instruction-tuned LLM, 2023. URL https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm.

T. Dao, D. Fu, S. Ermon, A. Rudra, and C. Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 16344-16359. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/67d57c32e20fd0a7a302cb81d36e40d5-Paper-Conference.pdf.

J. Duan, R. Lu, H. Duanmu, X. Li, X. Zhang, D. Lin, I. Stoica, and H. Zhang. MuxServe: Flexible spatial-temporal multiplexing for multiple LLM serving. In Proceedings of the 41st International Conference on Machine Learning, ICML'24. JMLR.org, 2024.

A. Gu and T. Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.

A. Gu, K. Goel, and C. Re. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations, 2022.
URL https://openreview.net/forum?id=uYLFoz1vlAC.

D. Hendrycks, S. Basart, S. Kadavath, M. Mazeika, A. Arora, E. Guo, C. Burns, S. Puranik, H. He, D. Song, et al. Measuring coding challenge competence with APPS. In 35th Conference on Neural Information Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks, 2021.

A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi. The curious case of neural text degeneration. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rygGQyrFvH.

Y. Jin, C.-F. Wu, D. Brooks, and G.-Y. Wei. S^3: Increasing GPU utilization during generative inference for higher throughput. In Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS), 2023. URL https://openreview.net/forum?id=zUYfbdNl1m.

W. Kwon, Z. Li, S. Zhuang, Y. Sheng, L. Zheng, C. H. Yu, J. E. Gonzalez, H. Zhang, and I. Stoica. Efficient memory management for large language model serving with PagedAttention. In Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles, 2023.

Y. Lai, C. Li, Y. Wang, T. Zhang, R. Zhong, L. Zettlemoyer, W.-t. Yih, D. Fried, S. Wang, and T. Yu. DS-1000: A natural and reliable benchmark for data science code generation. In Proc. International Conference on Machine Learning (ICML), pages 18319-18345. PMLR, 2023.

L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. F. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems, volume 35, pages 27730-27744. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf.

P. Patel, E. Choukse, C. Zhang, A. Shah, Í. Goiri, S. Maleki, and R. Bianchini. Splitwise: Efficient generative LLM inference using phase splitting. In 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), pages 118-132. IEEE, 2024.

C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.

R. Shahout, E. Malach, C. Liu, W. Jiang, M. Yu, and M. Mitzenmacher. Don't stop me now: Embedding-based scheduling for LLMs. In The Thirteenth International Conference on Learning Representations (ICLR), 2025. URL https://openreview.net/forum?id=7JhGdZvW4T.

Y. Sheng, L. Zheng, B. Yuan, Z. Li, M. Ryabinin, B. Chen, P. Liang, C. Ré, I. Stoica, and C. Zhang. FlexGen: High-throughput generative inference of large language models with a single GPU. In International Conference on Machine Learning (ICML), pages 31094-31116. PMLR, 2023.

R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.

G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican, et al. Gemini: a family
of highly capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, 2017.

J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems (NeurIPS), 35:24824-24837, 2022.

T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. HuggingFace's Transformers: State-of-the-art natural language processing, 2020. URL https://arxiv.org/abs/1910.03771.

S. Yao, D. Yu, J. Zhao, I. Shafran, T. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36:11809-11822, 2023.

Z. Zheng, X. Ren, F. Xue, Y. Luo, X. Jiang, and Y. You. Response length perception and sequence scheduling: An LLM-empowered LLM inference pipeline. In Advances in Neural Information Processing Systems, volume 36, pages 65517-65530, 2023. URL https://openreview.net/forum?id=eW233GDOpm.

T. Y. Zhuo, V. M. Chien, J. Chim, H. Hu, W. Yu, R. Widyasari, I. N. B. Yusuf, H. Zhan, J. He, I. Paul, S. Brunner, C. Gong, J. Hoang, A. R. Zebaze, X. Hong, W.-D. Li, J. Kaddour, M. Xu, Z. Zhang, P. Yadav, N. Jain, A. Gu, Z. Cheng, J. Liu, Q. Liu, Z. Wang, D. Lo, B. Hui, N. Muennighoff, D. Fried, X. Du, H. de Vries, and L. V. Werra. BigCodeBench: Benchmarking code generation with diverse function calls and complex instructions. In The Thirteenth International Conference on Learning Representations (ICLR), 2025. URL https://openreview.net/forum?id=YrycTjllL0.

A Dataset and Code Availability

We make our dataset and code publicly available. We uploaded our dataset to the HuggingFace platform at https://huggingface.co/datasets/danfperam/castillo under the Creative Commons Attribution License (CC BY 4.0). Moreover, we release the documented code used for generating the dataset, as well as the code required for running the experiments, at https://github.com/DanielFPerez/castillo under the Apache 2.0 license.

B Analyzing and Processing the Source Datasets

We refer to "Source Datasets" as the text corpora from which we generate the prompts fed to the models. We focus on open-source instruction-following corpora and prioritize those that were generated by "natural" model-user interactions, and those that focus on code generation. For each text corpus, we split the respective dataset into 70%-20%-10% training-validation-test splits. We intend the train and validation sets to serve future work in developing length-predictor models. The test set is a hold-out dataset that will subsequently be used for the scheduler experiments in systems research, in which the output-length predictor will be deployed.

B.1 Dolly Dataset

The dataset, presented by Conover et al. [2023], consists of 15k instruction-following records written by Databricks employees interacting with their model.
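The 70/20/10 split applied to every corpus, as described above, can be sketched in plain Python as follows (helper name and seed are ours, for illustration only; the released code may implement this differently, e.g., with stratification where a category label exists):

```python
import random

def split_70_20_10(samples, seed=42):
    """Shuffle a list of samples and split it into 70/20/10 train/val/test."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(0.7 * n)
    n_val = int(0.2 * n)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_70_20_10(range(100))
print(len(train), len(val), len(test))  # 70 20 10
```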
We access the dataset from https://huggingface.co/datasets/databricks/databricks-dolly-15k.

Schema The schema of each sample of the dataset is a dictionary with the following fields:

{
  'instruction': 'Given a reference text about Lollapalooza, where does ...',
  'context': 'Lollapalooza is an annual American festival...',
  'response': 'Lollapalooze is an annual musical festival held ...',
  'category': 'closed_qa'
}

Figure 7: DollyDataset prompt length distributions (number of tokens) from both the complete 15000-sample dataset (a) and the 2000 randomly stratified sampled instances (b). The sampled subset shares a similar distribution as the original dataset.
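Given the schema above, the instruction-plus-optional-context prompt assembly used for Dolly samples (detailed in the Text prompt paragraph of this section) can be sketched as follows; the helper name is ours:

```python
def build_dolly_prompt(sample: dict) -> str:
    """Concatenate the instruction with an optional 'Context' line.

    The context line is appended only when the sample carries a
    non-empty 'context' field, and omitted otherwise.
    """
    prompt = sample["instruction"]
    if sample.get("context"):
        prompt += "\nContext: " + sample["context"]
    return prompt

print(build_dolly_prompt({"instruction": "Name three fruits.", "context": ""}))
# Name three fruits.
```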
The context field is used by some samples as additional information when prompting the model, while the category field contains a label classifying the question into eight categories: brainstorming, classification, closed_qa, creative_writing, general_qa, information_extraction, open_qa, summarization. Figure 7 shows the Cumulative Distribution Function (CDF) of the tokenized input for all prompts in the Dolly dataset using the Llama-3.2-3B model. The right-hand plot also shows the ECDF of each of the categories.

Text prompt We use the following prompt structure for each sample of the DollyDataset:

sample['instruction'] + "\nContext: " + sample['context']

where the "Context" line is added only for samples that have a context field, and omitted otherwise.

Stratified Split The DollyDataset includes a categorical variable indicating the type of question or task (the category feature). To maintain consistency and ensure proper statistical data handling, we perform stratified sampling according to the category field, both for randomly selecting the initial 2000 samples from the dataset and for the subsequent train-validation-test splits.

B.2 ShareGPT dataset

The ShareGPT dataset is a large, publicly released collection of anonymized user-ChatGPT conversational exchanges scraped from publicly shared chat links, comprising paired prompts and AI responses for training and evaluating dialogue models [Chiang et al., 2023]. We use the following HuggingFace version of the ShareGPT dataset: https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered.

Schema The json file that corresponds to the data is a list of dict elements. Each of these elements has the following structure:

{
  'id': 'xd92L6L_48',
  'conversations': [
    {'from': 'human', 'value': 'Tell me a story.'},
    {'from': 'gpt', 'value': 'Once upon a time ....'},
    {'from': 'human', 'value': 'Make it shorter, '},
    ...]
}

Dataset cleansing The original dataset has 94145 samples.
First, we remove 550 data samples from the original dataset because they do not contain any conversations.
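The cleansing steps of this section, i.e., dropping samples with empty conversations and, as discussed next, dropping conversations that do not begin with a human turn, can be sketched as follows (helper name is ours):

```python
def clean_sharegpt(samples: list) -> list:
    """Keep samples whose conversation is non-empty and starts with a human turn."""
    kept = []
    for sample in samples:
        convs = sample.get("conversations") or []
        if convs and convs[0].get("from") == "human":
            kept.append(sample)
    return kept

raw = [
    {"id": "a", "conversations": []},                                    # empty: removed
    {"id": "b", "conversations": [{"from": "gpt", "value": "Hi"}]},      # gpt-first: removed
    {"id": "c", "conversations": [{"from": "human", "value": "Tell me a story."}]},
]
print([s["id"] for s in clean_sharegpt(raw)])  # ['c']
```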
Moreover, we notice that some originally long conversations with ChatGPT were split into multiple data samples, which causes some data samples to start their conversation with a gpt turn. We discard those data samples as well, since we do not want to bias the responses of the LLMs considered by CASTILLO with ChatGPT responses. Future work could consider caching the results from the CASTILLO models and feeding in the continuation of the ShareGPT conversations that start with human. After keeping only data samples starting with human, 58827 data samples remain.

Text prompt We use the following prompt structure for each sample of the ShareGPT dataset:

sample['conversations'][0]['value']

Figure 8: ShareGPT prompt length distributions (number of tokens) from both the filtered 58827-sample dataset (a) and the 2000 randomly sampled instances (b). The sampled subset shares a similar distribution as the original dataset.

Figure 8 shows the distribution of the length of the tokenized user prompts in terms of number of tokens, for both the filtered version of the dataset and the 2000 samples considered for CASTILLO. Both distributions are similar, with closely aligned percentile values.
It shows that 50% of the ShareGPT prompts are at most ~65 tokens long, and 95% are at most ~1400 tokens. Finally, some prompts in the filtered 58k samples are excessively long (see the long tail in Figure 8a), with a maximum length of 161306 tokens, far beyond the context window of most open-source LLMs. For the selected 2000 samples, we truncate prompts to a maximum of 2500 tokens, given that the 99th percentile already lies near 2500.

B.3 Alpaca Dataset

The Alpaca dataset presented by Taori et al. [2023] is a collection of 52000 instruction-following demonstrations generated using self-instruct techniques, where prompts are crafted and outputs are synthesized by GPT-3 to fine-tune smaller language models like LLaMA. We use the following HuggingFace version of the Alpaca dataset, which already performs some cleaning of the original data: https://huggingface.co/datasets/yahma/alpaca-cleaned.

Schema The Alpaca dataset is provided as a list of dict elements, where each sample of the dataset has the following structure:

{
  'instruction': 'Explain why the following fraction is equivalent to 1/4',
  'input': '4/16',
  'output': 'The fraction 4/16 is equivalent ....'
}

The input field is used by some samples as additional information when prompting the model.

Text prompt We use the following prompt structure for each sample
of the Alpaca dataset:

sample['instruction'] + "\nInput: " + sample['input']

Analogous to the DollyDataset, the "Input" line is added only for samples that have an input field, and omitted otherwise.

Figure 9: Alpaca prompt length distributions (number of tokens) from both the complete 51760-sample dataset (a) and the 2000 randomly sampled instances (b). The sampled subset shares a similar distribution as the original dataset.

Figure 9 shows the distribution of the length of the resulting input prompts using the Llama-3.2-1B tokenizer. This dataset has a shorter tail than the previously considered ones. The longest input prompt recorded is 630 tokens long, and 99% of the input prompts are shorter than 114 tokens. Moreover, Figure 9b shows a distribution similar to that of the original 51760 samples in Figure 9a.

B.4 Mbpp Dataset

The Mbpp (Mostly Basic Python Problems) dataset was originally published by Austin et al. [2021], and we access the dataset from https://github.com/google-research/google-research/tree/master/mbpp. The original dataset consists of 1000 crowd-sourced Python programming problems. We take the subset of the data that is hand-verified by the authors, which they call the sanitized version, comprising 974 crowd-sourced Python programming tasks designed to be solvable by entry-level programmers.
Schema Each sample in the Mbpp dataset has the following structure:

{
  'source_file': 'Benchmark Questions Verification V2.ipynb',
  'task_id': 3,
  'prompt': 'Write a python function to identify non-prime numbers.',
  'code': 'import math\ndef is_not_prime(n) ... ',
  'test_imports': [],
  'test_list': ['assert is_not_prime(2) == False',
                'assert is_not_prime(10) == True']
}

Figure 10: Mbpp distribution of the length of prompts in terms of number of tokens over the 974 samples.

Text prompt We follow the prompt format described by Austin et al. [2021]:

"You are an expert Python programmer, and here is your task: " + sample['prompt'] + "Your code should pass these tests:\n\n" + "\n".join(sample['test_list']) + "\n"

Figure 10 depicts the distribution of the length of the tokenized user prompts in terms of number of tokens. The input prompts in the dataset have an average length of approximately 154 tokens after tokenization, with lengths ranging from 88 to 2,265 tokens. The distribution is moderately skewed, with 75% of prompts shorter than 173 tokens and only 1% exceeding 337 tokens.

B.5 Apps Dataset

The Apps (Automated
Programming Progress Standard) dataset, introduced by Hendrycks et al. [2021], is a benchmark designed to evaluate the code generation capabilities of LLMs. It comprises 10,000 Python programming problems sourced from open-access coding platforms like Codeforces and Kattis, with difficulties ranging from introductory to competitive programming levels. We access the dataset from https://huggingface.co/datasets/codeparrot/apps, and choose the "train" split, which contains 5000 samples.

Schema Each sample in the Apps dataset has the following structure:

{
  'problem_id': 899,
  'question': 'Chef wants to teach a lesson of sharing to the students. \nTh[...]',
  'solutions': '["from math import ceil\\n\\nfor _ in range(int([...]", "import math\\nT=int(input())\\n\\nfor _ in rang[...]", ...]',
  'input_output': '{"inputs": [["2", "5", "1 2 3 4 5", "5", "5 4 3 2 1"]], "outputs": [["3", "5"]]}',
  'difficulty': 'interview',
  'url': 'https://www.codechef.com/CROS2020/problems/COCR106',
  'starter_code': ''
}

Text prompt We follow the prompt format described by Hendrycks et al. [2021]:

"QUESTION: " + sample["question"] + "\n\n" + "Your code should start with the following: \n" + sample['starter_code'] + "\n" + "The following is the standard input and output format: \n" + "Inputs:\n" + "\n".join([str(elem) for elem in json.loads(sample['input_output'])['inputs']]) + "\nOutputs:\n" + "\n".join([str(elem) for elem in json.loads(sample['input_output'])['outputs']])

Analogous to the previous datasets, we include a row in the text prompt only when the corresponding sample field actually contains data; e.g., we only include starter_code if the element is non-empty.

Dataset cleansing Our analysis found sample instances relating to factorial calculations that yield a very long element in the 'outputs' list of sample['input_output'], preventing the field from being parsed into a dict.
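A defensive sketch of this prompt assembly is shown below; the helper name is ours, and the try/except guard reflects the unparsable input_output instances discussed in this cleansing paragraph rather than the exact released implementation:

```python
import json
from typing import Optional

def build_apps_prompt(sample: dict) -> Optional[str]:
    """Assemble the APPS text prompt; return None if input_output cannot be parsed."""
    try:
        io = json.loads(sample["input_output"])
    except (json.JSONDecodeError, TypeError):
        return None  # such samples are dropped during dataset cleansing
    prompt = "QUESTION: " + sample["question"] + "\n\n"
    if sample.get("starter_code"):  # only included when non-empty
        prompt += "Your code should start with the following: \n" + sample["starter_code"] + "\n"
    prompt += "The following is the standard input and output format: \n"
    prompt += "Inputs:\n" + "\n".join(str(e) for e in io["inputs"])
    prompt += "\nOutputs:\n" + "\n".join(str(e) for e in io["outputs"])
    return prompt

sample = {
    "question": "Sum two numbers.",
    "starter_code": "",
    "input_output": '{"inputs": [["1 2"]], "outputs": [["3"]]}',
}
print(build_apps_prompt(sample))
```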
Additionally, after tokenizing all the samples of the dataset, we found instances whose tokenization reaches millions of tokens (maximum value 10,489,768). Without loss of generality, we remove these samples from the dataset before performing the splits. Further analysis reveals that the 99th percentile of the tokenized text prompts lies at 2422, so we truncate all prompts larger than 2500 tokens.

Stratified Split The Apps dataset includes a difficulty feature per sample, indicating the difficulty of the programming task: introductory, interview, and competition. To maintain consistency and ensure proper statistical data handling, we perform stratified sampling according to the difficulty variable, both for randomly selecting the initial 2000 samples from the dataset and for the subsequent train-validation-test splits.

Figure 11: Apps dataset prompt length distributions (number of tokens) from both the complete 4999-sample dataset (a) and the 2000 randomly stratified
sampled instances (b). The sampled subset shares a similar distribution as the original dataset.

Figure 11 shows the distribution of the length of the tokenized user prompts in terms of number of tokens, for both the filtered version of the dataset and the 2000 samples considered. The input prompts in the APPS dataset have a mean token length of approximately 545, with a wide range spanning from 87 to 2,534 tokens. The distribution is heavily right-skewed, with 75% of prompts under 650 tokens and the top 1% exceeding 2,100 tokens. As shown in Figure 11, prompt length correlates with problem difficulty, with competition-level tasks tending to have significantly longer inputs than interview or introductory problems. The dataset is dominated by introductory and interview tasks, which comprise over 90% of the total sample count.

B.6 DS-1000 Dataset

The DS-1000 dataset, introduced by Lai et al. [2023], is a benchmark comprising 1,000 real-world data science coding problems collected from StackOverflow, covering seven widely-used Python libraries such as NumPy, Pandas, and Matplotlib. To prevent memorization by models, many problems have been perturbed through surface or semantic modifications, ensuring they differ from their original StackOverflow sources. We access the dataset from https://huggingface.co/datasets/xlangai/DS-1000.

Figure 12: DS-1000 distribution of the length of prompts in terms of number of tokens over the 1000 samples.
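The percentile statistics annotated in these figures (25th to 99th percentiles of tokenized prompt lengths) can be reproduced from a list of token counts; a minimal sketch with toy data, assuming numpy is available:

```python
import numpy as np

def length_percentiles(token_counts, qs=(25, 50, 75, 95, 99)):
    """Return the requested percentiles of a list of prompt token counts."""
    return {q: float(np.percentile(token_counts, q)) for q in qs}

# Toy token counts standing in for a tokenized corpus.
counts = [67, 120, 169, 283, 395, 500, 696, 900, 1018, 2109]
stats = length_percentiles(counts)
print(stats[50])  # median prompt length of the toy data
```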
Schema Each sample in the DS-1000 dataset has the following structure:

{
  'prompt': "Problem:\nIn pandas, how do I replace &AMP; with '&'[...]",
  'reference_code': "def g(df):\n    return df.replace('&AMP;','&', rege[...]",
  'metadata': {'problem_id': 100, 'library_problem_id': 100, [...]},
  'code_context': 'import pandas as pd\nimport numpy as np\nimport copy\n[...]'
}

Text prompt Following Lai et al. [2023], we take the prompt feature from each sample as its text prompt. As shown in the HuggingFace-hosted DS-1000 dataset, this feature already contains all the information necessary for building the prompt to the model.

The input prompts in the DS-1000 dataset have a mean token length of around 317, with values ranging from 67 to 2,109 tokens. Figure 12 shows that the distribution of input token lengths is right-skewed, as 75% of prompts are shorter than 395 tokens, while only 1% exceed 1,018 tokens. The histogram reveals a concentration of prompts between 100 and 400 tokens, and the CDF plot confirms that most examples fall well below 700 tokens.

B.7 BigCodeBench Dataset

Zhuo et al. [2025] introduced the BigCodeBench dataset, a comprehensive benchmark designed to evaluate LLMs on complex code generation tasks. We select the "Instruct" variant of the dataset, which offers concise natural language prompts to test the instruction-following capabilities of the models. We access the dataset through HuggingFace at https://huggingface.co/datasets/bigcode/bigcodebench, and we select the latest version available at the time of writing (v0.1.4), which contains 1140 samples.

Schema
Each sample in the BigCodeBench dataset has the following structure:

{
  'task_id': 'BigCodeBench/222',
  'complete_prompt': 'import math\nimport numpy as np\nimport matpl[...]',
  'instruct_prompt': 'Sort the given list in ascending order based on the deg[...]',
  'canonical_solution': '    sorted_list = sorted(list_input, key=l[...]',
  'code_prompt': 'import math\nimport numpy as np\nimport matplotlib.pyplot [...]',
  'test': "import unittest\nimport doctest\nclass TestCases(unittest.TestCase):\n[...]",
  'entry_point': 'task_func',
  'doc_struct': '{"description": ["Sort the given list in ascending [...]."], "notes": [], "params": ["list_input (list): The list to be sorted."], "returns": ["tuple: A tuple containing:[...]", [...]]}',
  'libs': "['math', 'numpy', 'matplotlib']"
}

Text prompt Zhuo et al. [2025] transformed the tasks in the dataset via pre-defined rules to create natural-language-oriented instructions for prompting LLMs. This transformation into natural language is encapsulated in the instruct_prompt feature of the dataset samples, and we take this string as the text prompt for our models.

Most input prompts in BigCodeBench are relatively concise, with a mean length of about 180 tokens and a median of 164. The distribution is tightly packed, as 75% of prompts fall below 205 tokens and nearly all (99%) are under 400. The histogram reveals a strong peak between 130-200 tokens, while the ECDF confirms a steep accumulation of samples in this lower range. Only a few outliers exceed 1,000 tokens, suggesting limited variation in prompt length.

Figure 13: BigCodeBench distribution of the length of prompts in terms of number of tokens over the 1140 samples.

C LLMs Used

The LLMs used for generating the CASTILLO dataset are listed in Table 2. All models were instantiated using the HuggingFace transformers library [Wolf et al., 2020]. For hardware resources, we generated the responses on an HPC cluster using a different number of Nvidia H100 GPUs per model, as shown in Table 2. We allocate 20 CPUs and 64-128 GB of RAM per GPU utilized.

Table 2: Open-source LLMs considered, with their corresponding model ID from the HuggingFace platform, and the number of Nvidia H100 GPUs used for generating the model responses.

Short Name | HuggingFace Model Name               | H100s used | Organization
llama-1B   | meta-llama/Llama-3.2-1B-Instruct     | 2          | Meta
llama-3B   | meta-llama/Llama-3.2-3B-Instruct     | 2          | Meta
llama-8B   | meta-llama/Llama-3.1-8B-Instruct     | 2          | Meta
llama-70B  | meta-llama/Llama-3.3-70B-Instruct    | 4          | Meta
mist-7B    | mistralai/Mistral-7B-Instruct-v0.3   | 2          | Mistral AI
minist-8B  | mistralai/Ministral-8B-Instruct-2410 | 2          | Mistral AI
mixt-8x7B  | mistralai/Mixtral-8x7B-Instruct-v0.1 | 4          | Mistral AI
qwen-7B    | Qwen/Qwen2.5-7B-Instruct-1M          | 2          | Alibaba Cloud
qwen-14B   | Qwen/Qwen2.5-14B-Instruct            | 4          | Alibaba Cloud
qwen-32B   | Qwen/Qwen2.5-32B-Instruct            | 4          | Alibaba Cloud
phi4       | microsoft/Phi-4-mini-instruct        | 2          | Microsoft
gemma-4B   | google/gemma-3-4b-it                 | 2          | Google
gemma-12B  | google/gemma-3-12b-it                | 2          | Google

D Identifying and Filtering out Text Degeneration Instances

This section provides further detail on how we identified and filtered instances of text degeneration in the CASTILLO default dataset, as summarized in Algorithm 1. We initially observed that certain samples exhibited unusually high standard deviations in their output_sizes; in some cases, the standard deviation was greater than the mean. These cases suggested highly inconsistent response lengths across generations for the same prompt. However, upon manual
https://arxiv.org/abs/2505.16881v1
inspection, we found that not all such samples were degenerated: some included valid long-form responses or variations in verbosity that did not involve repetition or other undesirable generation patterns (see Figure 14 for an illustrative non-degenerated example).

Figure 14: Examples with no text degeneration where the standard deviation of the response lengths is larger than their mean. (a) Verbose and non-verbose outputs from the minist-8B model on an Alpaca sample (output sizes [84, 13, 13, 10, 20, 157, 10, 154, 16, 18]; mean 49.5, std. dev. 56.95). (b) Verbose and non-verbose outputs from the phi4 model on an Apps sample (output sizes [28, 28, 93, 28, 294, 98, 28, 283, 28, 28]; mean 93.6, std. dev. 100.93).

Figure 15(a): Sentence-level repetition of text from the llama-8B model on a Dolly sample (output sizes [15001, 15001, 398, 15001, 413, 15001, 381, 384, 300, 15001]); the degenerated responses repeat the same phrase until reaching the generation limit.
Figure 15(b): Block-level repetition of alphanumeric values from the llama-8B model on an Alpaca sample (output sizes [15001, 2280, 15001, 15001, 15001, 1153, 781, 15001, 15001, 15001]); the degenerated responses repeat the same algebraic expression. Figure 15 overall: Text degeneration examples for sentence-level alphanumeric text blocks.

Figure 16: Text degeneration examples for character-level alphanumeric sequences. (a) Character-level repetition of numbers from the qwen-7B model on a ShareGPT sample (output sizes [1233, 15001, 923, 1980, 1638, 15001, 15001, 15001, 1219, 1582]). (b) Character-level repetition of letters from the phi4 model on an Apps sample (output sizes [693, 479, 779, 869, 740, 442, 15001, 637, 697, 721]).
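As a supplementary illustration (not part of the paper's sanitization pipeline), the kind of consecutive repetition visible in Figures 15 and 16 can be surfaced with a simple n-gram check. The function name and window size below are our own choices:

```python
# Illustrative helper (our own, not the paper's method): count the maximum
# number of times any n-word window repeats back-to-back in a text.
def max_consecutive_ngram_repeats(text, n=4):
    words = text.split()
    best = 1
    for start in range(len(words) - n):
        gram = words[start:start + n]
        count = 1
        pos = start + n
        # Walk forward while the same n-gram keeps repeating contiguously.
        while words[pos:pos + n] == gram:
            count += 1
            pos += n
        best = max(best, count)
    return best

# A synthetic example mimicking the phrase loop in Figure 15(a).
degenerated = "a close friend of many surfers " * 8 + "and others."
print(max_consecutive_ngram_repeats(degenerated, n=6))  # -> 8
```

A healthy response yields a count near 1, while degenerated outputs like those in Figure 15(a) produce large counts, so a threshold on this value could complement the length-based heuristics described below.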
Algorithm 1 SanitizeDataset: Identify and Sanitize Degenerated Samples
1:  procedure SanitizeDataset(dataset)
2:    Initialize degeneration_dataset ← [ ]
3:    Initialize sanitized_dataset ← [ ]
4:    for each sample in dataset do
5:      Compute max_output_size, output_mean, output_std
6:      Determine has_max_length_degeneration
7:      Determine has_high_variance_degeneration
8:      has_degeneration ← has_max_length_degeneration or has_high_variance_degeneration
9:      if has_degeneration then
10:       Add sample to degeneration_dataset
11:       if sample cannot be sanitized then
12:         continue
13:       end if
14:       Create sanitized_sample from sample
15:       Remove degenerated outputs based on condition
16:       if sanitized outputs are non-empty then
17:         Recompute statistics (mean, std, percentiles)
18:         Mark longest response as "TEXT DEGENERATION"
19:         Add sanitized_sample to sanitized_dataset
20:       end if
21:     else
22:       Add original sample to sanitized_dataset
23:     end if
24:   end for
25:   return degeneration_dataset, sanitized_dataset
26: end procedure

To systematically identify text degeneration samples, we first focus on samples containing outputs that reached the maximum token limit imposed during generation (15,000 tokens). We find that all samples with one or more outputs of length ≥ 14,999 tokens contained instances of text degeneration. These instances typically exhibit repetition either at the token level (see Figure 16) or at the sentence or text-block level (see Figure 15). We thus established the first filtering heuristic: flagging any sample where at least one output in the batch reached the 15,000-token generation limit. Further empirical inspection revealed that some degenerated samples did not reach the 15,000-token cap but still contained text degeneration patterns, such as repeated numbers or characters. These were harder to detect using a simple max-length criterion. We noticed that many of these cases exhibit a high variance in output lengths and often include at least one particularly verbose response.
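The flagging conditions summarized in Algorithm 1 can be sketched as follows. This is a minimal sketch using the thresholds stated in this section (15,000-token cap; std exceeding twice the mean with a maximum output of at least 8,500 tokens); the function names mirror Algorithm 1, but the exact implementation details, such as the use of the population standard deviation, are our own assumptions:

```python
# Minimal sketch of the two flagging heuristics in Algorithm 1.
# Assumption: the population standard deviation (pstdev) matches the
# std-dev values reported in the figures.
from statistics import mean, pstdev

def has_max_length_degeneration(output_sizes):
    """First heuristic: at least one output reached the generation cap."""
    return max(output_sizes) >= 14_999

def has_high_variance_degeneration(output_sizes):
    """Second heuristic: extreme length variance plus one very long output."""
    return (pstdev(output_sizes) > 2 * mean(output_sizes)
            and max(output_sizes) >= 8_500)

def is_degenerated(output_sizes):
    return (has_max_length_degeneration(output_sizes)
            or has_high_variance_degeneration(output_sizes))

# The llama-8B/Dolly sample from Figure 15(a): several outputs hit the cap.
print(is_degenerated([15001, 15001, 398, 15001, 413, 15001, 381, 384, 300, 15001]))  # -> True
```

By contrast, the non-degenerated minist-8B sample from Figure 14(a) ([84, 13, 13, 10, 20, 157, 10, 154, 16, 18]) is not flagged: its standard deviation (56.95) exceeds the mean (49.5) but not twice the mean, and no output approaches either token threshold.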
To capture these, we introduced a second heuristic: flag any sample where both the standard deviation of output_sizes exceeded twice the mean and the maximum output size was at least 8,500 tokens. Samples matching either of