time series. Our benchmark STEB computes indicators for the reliability and consistency of the scores and tracks the running time for computing each measure. To this end, we employ an array of TS transformations along a modulation path to control data modification. We utilized STEB to compare and rank 41 quantitative measures and found that the choice of TS embedding has a significant impact on a measure's score. As STEB will be open-sourced after acceptance, we plan to improve and extend this benchmark with the community. This includes the handling of variable-length TS, the addition of further measures, and new transformations. Lastly, it would be interesting to investigate the measures' sensitivity to the sizes of the real and synthetic datasets.

References

[1] Ahmed Alaa, Boris van Breugel, Evgeny S. Saveliev, and Mihaela van der Schaar. How faithful is your synthetic data? Sample-level metrics for evaluating and auditing generative models. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvari, Gang Niu, and Sivan Sabato, editors, Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 290–306. PMLR, 17–23 Jul 2022.
[2] Yihao Ang, Qiang Huang, Yifan Bao, Anthony K. H. Tung, and Zhiyong Huang. TSGBench: Time series generation benchmark. Proc. VLDB Endow., 17(3):305–318, November 2023.
[3] Hiba Arnout, Johannes Kehrer, Johanna Bronner, and Thomas Runkler. Visual evaluation of generative adversarial networks for time series data. arXiv preprint, December 2019.
[4] Christoph Bandt and Bernd Pompe. Permutation entropy: A natural complexity measure for time series. Physical Review Letters, 88(17):174102, 2002.
[5] Serguei Barannikov, Ilya Trofimov, Grigorii Sotnikov, Ekaterina Trimbach, Alexander Korotin, Alexander Filippov, and Evgeny Burnaev. Manifold topology divergence: A framework for comparing data manifolds. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 7294–7305. Curran Associates, Inc., 2021.
[6] Donald J. Berndt and James Clifford. Using dynamic time warping to find patterns in time series. In AAAI-94 Workshop on Knowledge Discovery in Databases, volume 10, pages 359–370, Menlo Park, California, 1994. The AAAI Press.
[7] R. Bousseljot, D. Kreiseler, and A. Schnabel. Nutzung der EKG-Signaldatenbank CARDIODAT der PTB über das Internet, January 1995.
[8] Eoin Brophy, Zhengwei Wang, Qi She, and Tomás Ward. Generative adversarial networks in time series: A systematic literature review. ACM Comput. Surv., 55(10), February 2023.
[9] Luis M. Candanedo, Véronique Feldheim, and Dominique Deramaix. Data driven prediction models of energy use of appliances in a low-energy house. Energy and Buildings, 140:81–97, 2017.
[10] Robert B. Cleveland, William S. Cleveland, Jean E. McRae, and Irma Terpenning. STL: A seasonal-trend decomposition. Journal of Official Statistics, 6(1):3–73, 1990.
[11] Hoang Anh Dau, Eamonn Keogh, Kaveh Kamgar, Chin-Chia Michael Yeh, Yan Zhu, Shaghayegh Gharghabi, Chotirat Ann Ratanamahatana, Yanping, Bing Hu, Nurjahan Begum, Anthony Bagnall, Abdullah Mueen, Gustavo Batista, and Hexagon-ML. The UCR time series classification archive, October 2018. https://www.cs.ucr.edu/~eamonn/time_series_data_2018/.
[12] Janez Demšar. Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7:1–30, 2006.
[13] Cristóbal Esteban, Stephanie L. Hyland, and Gunnar Rätsch. Real-valued
https://arxiv.org/abs/2505.21160v1
(medical) time series generation with recurrent conditional GANs. arXiv preprint, June 2017.
[14] Joao Fonseca and Fernando Bacao. Tabular and latent space synthetic data generation: A literature review. Journal of Big Data, 10(1):115, 2023.
[15] Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[16] A. L. Goldberger, L. A. N. Amaral, L. Glass, J. M. Hausdorff, P. Ch. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation [Online], 101(23):e215–e220, 2000 (June 13).
[17] Fuqiang Gu, Mu-Huan Chung, Mark Chignell, Shahrokh Valaee, Baoding Zhou, and Xue Liu. A survey on deep learning for human activity recognition. ACM Comput. Surv., 54(8), October 2021.
[18] J. L. Hodges Jr. The significance probability of the Smirnov two-sample test. Arkiv för Matematik, 3(5):469–486, 1958.
[19] Gao Huang, Yang Yuan, Qiantong Xu, Chuan Guo, Yu Sun, Felix Wu, and Kilian Weinberger. An empirical study on evaluation metrics of generative adversarial networks. arXiv preprint, August 2018.
[20] Ali Ismail-Fawaz, Maxime Devanne, Stefano Berretti, Jonathan Weber, and Germain Forestier. Establishing a unified evaluation framework for human motion generation: A comparative analysis of metrics. arXiv preprint, 2024.
[21] Paul Jeha, Michael Bohlke-Schneider, Pedro Mercado, Shubham Kapoor, Rajbir Singh Nirwan, Valentin Flunkert, Jan Gasthaus, and Tim Januschowski. PSA-GAN: Progressive self attention GANs for synthetic time series. In International Conference on Learning Representations, 2022.
[22] Shruti Kaushik, Abhinav Choudhury, Pankaj Kumar Sheron, Nataraj Dasgupta, Sayee Natarajan, Larry A. Pickett, and Varun Dutt. AI in healthcare: Time-series forecasting using statistical, neural, and ensemble architectures. Frontiers in Big Data, 3:4, 2020.
[23] Patrik Joslin Kenfack, Daniil Dmitrievich Arapov, Rasheed Hussain, S.M. Ahsan Kazmi, and Adil Khan. On the fairness of generative adversarial networks (GANs). In 2021 International Conference "Nonlinearity, Information and Robotics" (NIR), pages 1–7, 2021.
[24] William H. Kruskal and W. Allen Wallis. Use of ranks in one-criterion variance analysis. Journal of the American Statistical Association, 47(260):583–621, 1952.
[25] Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[26] Guokun Lai, Wei-Cheng Chang, Yiming Yang, and Hanxiao Liu. Modeling long- and short-term temporal patterns with deep neural networks. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR '18, pages 95–104. Association for Computing Machinery, 2018.
[27] Gregory R. Lee, Ralf Gommers, Filip Waselewski, Kai Wohlfahrt, and Aaron O'Leary. PyWavelets: A Python package for wavelet analysis. Journal of Open Source Software, 4(36):1237, 2019.
[28] Mark Leznik, Arne Lochner, Stefan Wesner, and Jörg Domaschka. [SoK] The
great GAN bake off: An extensive systematic evaluation of generative adversarial network architectures for time series synthesis. Journal of Systems Research, 2(1), 2022.
[29] Xiaomin Li, Vangelis Metsis, Huangyingrui Wang, and Anne Hee Hiong Ngu. TTS-GAN: A transformer-based time-series generative adversarial network. In Martin Michalowski, Syed Sibte Raza Abidi, and Samina Abidi, editors, Artificial Intelligence in Medicine, pages 133–143, Cham, 2022. Springer International Publishing.
[30] Xiaomin Li, Anne Hee Hiong Ngu, and Vangelis Metsis. TTS-CGAN: A transformer time-series conditional GAN for biosignal data augmentation. arXiv preprint, June 2022.
[31] David Lopez-Paz and Maxime Oquab. Revisiting classifier two-sample tests. In International Conference on Learning Representations, Toulon, France, April 2017.
[32] Hang Lou, Siran Li, and Hao Ni. PCF-GAN: Generating sequential data via the characteristic function of measures on the path space. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 39755–39781. Curran Associates, Inc., 2023.
[33] Carl H. Lubba, Sarab S. Sethi, Philip Knaute, Simon R. Schultz, Ben D. Fulcher, and Nick S. Jones. catch22: CAnonical Time-series CHaracteristics: Selected through highly comparative time-series analysis. Data Mining and Knowledge Discovery, 33(6):1821–1852, 2019.
[34] Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are GANs created equal? A large-scale study. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
[35] Maggie, Oren Anava, Vitaly Kuznetsov, and Will Cukierski. Web traffic time series forecasting. Kaggle, 2017.
[36] H. B. Mann and D. R. Whitney. On a test of whether one of two random variables is stochastically larger than the other. The Annals of Mathematical Statistics, 18(1):50–60, 1947.
[37] Casey Meehan, Kamalika Chaudhuri, and Sanjoy Dasgupta. A non-parametric test to detect data-copying in generative models. In Silvia Chiappa and Roberto Calandra, editors, Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pages 3546–3556. PMLR, August 2020.
[38] Daniela Micucci, Marco Mobilio, and Paolo Napoletano. UniMiB SHAR: A dataset for human activity recognition using acceleration data from smartphones. Applied Sciences, 7(10), 2017.
[39] Muhammad Ferjad Naeem, Seong Joon Oh, Youngjung Uh, Yunjey Choi, and Jaejun Yoo. Reliable fidelity and diversity metrics for generative models. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 7176–7185. PMLR, July 2020.
[40] Hao Ni, Lukasz Szpruch, Magnus Wiese, Shujian Liao, and Baoren Xiao. Conditional Sig-Wasserstein GANs for time series generation. arXiv preprint, June 2020.
[41] Alexander Nikitin, Letizia Iannucci, and Samuel Kaski. TSGM: A flexible framework for generative modeling of synthetic time series. In A. Globerson, L. Mackey, D. Belgrave, A. Fan, U. Paquet, J. Tomczak, and C. Zhang, editors, Advances in Neural Information Processing Systems, volume 37, pages 129042–129061. Curran Associates, Inc., 2024.
[42] Skyler Norgaard, Ramyar Saeedi, Keyvan Sasani,
and Assefaw H. Gebremedhin. Synthetic sensor data generation for health applications: A supervised deep learning approach. In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pages 1164–1167, 2018.
[43] Kun Ouyang, Reza Shokri, David S. Rosenblum, and Wenzhuo Yang. A non-parametric generative model for human trajectories. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, IJCAI '18, pages 3812–3817. AAAI Press, 2018.
[44] Marco A. F. Pimentel, Alistair E. W. Johnson, Peter H. Charlton, Drew Birrenkott, Peter J. Watkinson, Lionel Tarassenko, and David A. Clifton. Toward a robust estimation of respiratory rate from pulse oximeters. IEEE Transactions on Biomedical Engineering, 64(8):1914–1923, 2017.
[45] Zhaozhi Qian, Rob Davis, and Mihaela van der Schaar. Synthcity: A benchmark framework for diverse use cases of tabular synthetic data. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, editors, Advances in Neural Information Processing Systems, volume 36, pages 3173–3188. Curran Associates, Inc., 2023.
[46] Suman Ravuri and Oriol Vinyals. Classification accuracy score for conditional generative models. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[47] Eitan Richardson and Yair Weiss. On GANs and GMMs. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
[48] Joshua S. Richman and J. Randall Moorman. Physiological time-series analysis using approximate entropy and sample entropy. American Journal of Physiology-Heart and Circulatory Physiology, 278(6):H2039–H2049, 2000.
[49] Ali Seyfi, Jean-Francois Rajotte, and Raymond Ng. Generating multivariate time series with common source coordinated GAN (COSCI-GAN). Advances in Neural Information Processing Systems, 35:32777–32788, 2022.
[50] Sahil Sidheekh, Aroof Aimen, and Narayanan C. Krishnan. On characterizing GAN convergence through proximal duality gap. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pages 9660–9670. PMLR, 18–24 Jul 2021.
[51] Michael Stenger, André Bauer, Thomas Prantl, Robert Leppich, Nathaniel Hudson, Kyle Chard, Ian Foster, and Samuel Kounev. Thinking in categories: A survey on assessing the quality for time series synthesis. J. Data and Information Quality, 16(2), June 2024.
[52] Michael Stenger, Robert Leppich, Ian Foster, Samuel Kounev, and André Bauer. Evaluation is key: A survey on evaluation measures for synthetic time series. Journal of Big Data, 11(1):66, 2024.
[53] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In International Conference on Learning Representations, 2016.
[54] Raphael Vallat. AntroPy, 2024. Version 0.1.8.
[55] Boris van Breugel, Trent Kyono, Jeroen Berrevoets, and Mihaela van der Schaar. DECAF: Generating fair synthetic data using causally-aware generative networks. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, volume 34, pages 22221–22233. Curran Associates, Inc., 2021.
[56] Boris van Breugel, Hao Sun,
Zhaozhi Qian, and Mihaela van der Schaar. Membership inference attacks against synthetic data through overfitting detection. In Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, editors, Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, volume 206 of Proceedings of Machine Learning Research, pages 3493–3514. PMLR, 25–27 Apr 2023.
[57] Boris van Breugel and Mihaela van der Schaar. Beyond privacy: Navigating the opportunities and challenges of synthetic data. arXiv preprint, April 2023.
[58] Magnus Wiese, Lianjun Bai, Ben Wood, and Hans Buehler. Deep hedging: Learning to simulate equity option markets. arXiv preprint, November 2019.
[59] Tianlin Xu, Li Kevin Wenliang, Michael Munn, and Beatrice Acciaio. COT-GAN: Generating sequential data via causal optimal transport. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 8798–8809. Curran Associates, Inc., 2020.
[60] Jinsung Yoon, Daniel Jarrett, and Mihaela van der Schaar. Time-series generative adversarial networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[61] Zhihan Yue, Yujing Wang, Juanyong Duan, Tianmeng Yang, Congrui Huang, Yunhai Tong, and Bixiong Xu. TS2Vec: Towards universal representation of time series. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8):8980–8987, June 2022.

A Summary of Evaluation Measures

Below, we list each measure analyzed in this work in alphabetical order and briefly describe it. For many of them, a detailed definition can be found in [52]. In any case, we reference the original work introducing each measure as well as other sources used in their (re-)implementation. An asterisk indicates that the measure requires a TS embedding.
In this case, we use vectors ⃗x, ⃗y to represent TS X, Y; l is the TS length, d the number of feature channels, and δ the embedding dimension.

ACS [29]. Average cosine similarity (ACS) compares all pairs of real TS X and synthetic TS Y with respect to their cosine similarity (⃗x · ⃗y)/(‖⃗x‖ ‖⃗y‖). The vectors ⃗x and ⃗y are calculated by aggregating seven different statistics of X and Y, respectively. If the data is labeled, pairs are only constructed within each class. The final score is the average similarity over all pairs.

ApEn [28]. Approximate entropy (ApEn) was proposed by [48] to determine the regularity and complexity of univariate time series. The synthetic data measure is derived by computing the squared difference between the approximate entropy of each channel of the real and synthetic time series. [28] subsample both datasets to speed up the computation due to the quadratic complexity. We chose a sample size of n = 100.

Authenticity* [1]. Authenticity seeks to measure the proportion of synthetic instances that are novel relative to the real dataset by comparing the distance of the nearest synthetic neighbor to that of the nearest real one. The distances are computed in a spherical embedding space where samples closer to the origin are meant to be typical instances of the real data distribution.
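To make the pairwise averaging behind ACS concrete, the following sketch averages cosine similarities between statistic vectors. The exact seven statistics of [29] are not reproduced here; five simple per-channel aggregates serve as stand-ins, so this is an illustration, not the benchmark's implementation:

```python
import numpy as np

def embed_stats(ts):
    # Stand-in embedding: per-channel aggregates of a TS with shape (l, d).
    # The seven statistics used by [29] are replaced by five illustrative ones.
    return np.concatenate([ts.mean(axis=0), ts.std(axis=0),
                           ts.min(axis=0), ts.max(axis=0),
                           np.median(ts, axis=0)])

def average_cosine_similarity(real, synthetic):
    # Mean cosine similarity over all (real, synthetic) pairs of statistic vectors.
    X = np.stack([embed_stats(t) for t in real])       # shape (n_r, 5d)
    Y = np.stack([embed_stats(t) for t in synthetic])  # shape (n_s, 5d)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    Y = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return float((X @ Y.T).mean())                     # average over all pairs
```

With labeled data, the same routine would be applied per class and the per-class scores averaged, as described above.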
Autocorrelation [40]. The autocorrelation measure is the squared difference of autocorrelation matrices computed for the real and the synthetic dataset, respectively. The autocorrelations are determined for each channel up to lag l/4 and averaged across the TS.

C2ST [31]. The classifier two-sample test (C2ST) generally assesses whether two sets of data points are sampled from the same distribution. Here, this is realized through a DL-based binary classifier c : X → {0, 1} applied to real data as class 0 and synthetic data as class 1, combined with a hypothesis test on the class predictions. c is trained on D_train, and the predictions are taken from D_held_out and a portion of D_s.

CT* [37]. The data-copying test targets generator overfitting, that is, it detects synthetic data that are merely minimal variations of real data instances. Using D_train and D_held_out, the measure compares the distances between TS from D_train and D_held_out with those between TS from D_train and D_s using a hypothesis test. Ideally, these distances should be approximately even in both cases.

CAS [46]. The classifier accuracy score (CAS) is a measure for conditional generative models, meaning it requires labeled data. This method trains a deep classifier separately on D_s and D_train, yielding two models. Both are evaluated on D_held_out to see if the accuracies achieved by both models are similar.

Context-FID* [21]. Context-FID is a derivative of the "regular" Fréchet inception distance (FID) popular in image synthesis. The original implementation of Context-FID used the unsupervised representation model proposed by [15], while we separated the embedding step and the distance calculation. We replaced the representation model with a newer method, TS2Vec.

Coverage* [39]. This measure counts the number of real data instances ⃗x for which there is a synthetic instance ⃗y in their neighborhood and divides it by the size of D_r. Coverage C is defined as

C := \frac{1}{|D_r|} \sum_{\vec{x} \in D_r} \mathbf{1}\{\exists \vec{y} \in D_s : \vec{y} \in B(\vec{x}, d^{NN}_k(\vec{x}, D_r))\}, (2)

where d^{NN}_k(⃗x, D) is the distance of vector ⃗x to its kth-nearest neighbor in D and B(c, r) a ball in the vector space with center c and radius r.

Density* [39]. The density measure determines for each synthetic instance in how many neighborhoods of real instances it is located, adds the result up across all synthetic instances, and divides the sum by the neighborhood size times the size of the synthetic dataset. For the embedded real ⃗x and synthetic ⃗y, density D is given by

D := \frac{1}{k |D_s|} \sum_{\vec{y} \in D_s} \sum_{\vec{x} \in D_r} \mathbf{1}\{\vec{y} \in B(\vec{x}, d^{NN}_k(\vec{x}, D_r))\}, (3)

where d^{NN}_k(⃗x, D) and B(c, r) are defined as above.

Detection_MLP* [45]. This is a variant of the discriminative score applied to already embedded data, with a multilayer perceptron (MLP) of depth two and 100 hidden units as the discriminator model. Its score is the AUCROC of classifying real and synthetic data.

Detection_XGB* [45]. This is a variant of the discriminative score applied to already embedded data, with an XGBoost classifier as the discriminator model. Its score is the AUCROC of classifying real and synthetic data.

Detection_GMM* [45]. This is a variant of the discriminative score applied to
already embedded data, with a Gaussian mixture model (GMM) as the discriminator. Its score is the AUCROC of classifying real and synthetic data.

Detection_linear* [45]. This is a variant of the discriminative score applied to already embedded data, with a logistic regression classifier as the discriminator model. Its score is the AUCROC of classifying real and synthetic data.

Discriminative score [60]. Uses a post-hoc RNN to classify (i.e., discriminate) original and synthetic data, with the achieved accuracy as the score. We re-implemented the original architecture but updated the training procedure and standardized it for all DL-based discriminators.

Distributional metric [58]. The distributional metric compares the real to the synthetic data distribution based on their probability mass functions. To this end, the values in each channel are binned, across the real dataset on the one side and the synthetic dataset on the other. Finally, the mean absolute difference between the binnings of corresponding channels of the real and synthetic datasets is calculated and averaged over all channels.

DOMIAS* [56]. DOMIAS is a data privacy measure centered around a density-based membership inference attack. The attack aims to infer membership by targeting local overfitting of the generative model. This measure was proposed in three variants in the original work. We utilize the best-performing one, which uses a block neural auto-regressive flow (BNAF) for density estimation.

FBCA* [49]. Feature-based correlation analysis (FBCA) calculates the discrepancy of correlations of feature vectors extracted from the real and synthetic time series. It applies five different statistics to the two correlation matrices: mean absolute error, mean squared error, Frobenius norm, Kendall rank correlation coefficient, and Spearman rank correlation coefficient.
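The neighborhood-based scores of Equations (2) and (3) (Coverage and Density) reduce to a k-nearest-neighbor radius per real instance. A minimal NumPy sketch, assuming already-embedded data as row vectors (an illustration, not the benchmark's implementation):

```python
import numpy as np

def knn_radii(X, k):
    # Distance of each point in X to its k-th nearest neighbor within X.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d.sort(axis=1)       # column 0 is the zero distance to the point itself
    return d[:, k]

def coverage_and_density(real, synth, k=5):
    r = knn_radii(real, k)                              # radius of each ball B(x, d_k)
    d = np.linalg.norm(real[:, None, :] - synth[None, :, :], axis=-1)  # (n_r, n_s)
    inside = d <= r[:, None]            # is synthetic y inside the ball of real x?
    coverage = inside.any(axis=1).mean()                # Eq. (2)
    density = inside.sum() / (k * synth.shape[0])       # Eq. (3)
    return coverage, density
```

Note that coverage lies in [0, 1] by construction, while density can exceed 1 when synthetic instances concentrate in dense real regions.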
ICD [28]. Intra-class distance (ICD) measures the average distance between the generated TS using the dynamic time warping (DTW) distance [6]. Formally, ICD is defined as

ICD := \frac{1}{|D_s|^2} \sum_{Y \in D_s} \sum_{Y' \in D_s} DTW(Y, Y'). (4)

[28] subsample D_s to speed up the computation due to the quadratic complexity. We chose a sample size of n = 100, considerably more representative than the original n = 10.

Improved precision* [25]. This measure counts and averages the number of generated TS for which there is a real TS in their vicinity. More formally,

IP := \frac{1}{|D_s|} \sum_{\vec{y} \in D_s} \mathbf{1}\{\exists \vec{x} \in D_r : \vec{y} \in B(\vec{x}, d^{NN}_k(\vec{x}, D_r))\}, (5)

where d^{NN}_k(⃗x, D) is the distance of vector ⃗x to its kth-nearest neighbor in D and B(c, r) a ball in the vector space with center c and radius r.

Improved recall* [25]. Analogously, improved recall counts the real TS with a synthetic TS in their vicinity:

IR := \frac{1}{|D_r|} \sum_{\vec{x} \in D_r} \mathbf{1}\{\exists \vec{y} \in D_s : \vec{x} \in B(\vec{y}, d^{NN}_k(\vec{y}, D_s))\}. (6)

INND [3]. The incoming nearest neighbor distance (INND) calculates the average DTW distance of any synthetic TS to its nearest real neighbor. Formally,

INND := \frac{1}{|D_s|} \sum_{Y \in D_s} \min_{X \in D_r} DTW(Y, X). (7)

[3] subsample D_r and use only one synthetic TS to monitor training. For evaluation, we chose a sample size of n = 100 for both datasets.

JSD* [43]. Generally speaking, the Jensen-Shannon divergence (JSD) measures the dissimilarity between two distributions. In this case, these are the distributions of scalar values in the TS in the real and synthetic datasets, discretized by binning these values.

KLD*
[47]. Similar to JSD, the Kullback-Leibler divergence (KLD) measures the dissimilarity between two distributions. Again, these are the distributions of scalar values in the TS in the real and synthetic datasets, discretized by binning these values.

Max-RTS* [42]. The maximum real-to-synthetic similarity (Max-RTS) computes the cosine similarity between the real and synthetic TS closest to each other in the embedding space, given by

Max-RTS := \max_{\vec{x} \in D_r, \vec{y} \in D_s} \frac{\vec{x} \cdot \vec{y}}{\|\vec{x}\|_2 \|\vec{y}\|_2}. (8)

MTop-Div* [5]. Manifold topology divergence (MTop-Div) determines the discrepancy between the real and synthetic data distributions topologically. The datasets are interpreted as point clouds, and topological concepts are used to assess the similarity of the two clouds.

NDB* [47]. Number of statistically different bins (NDB) assesses the degree to which the modes in the synthetic dataset match those in the real training dataset. The modes are estimated using K-means clustering, and matches are determined using a two-sample hypothesis test comparing pairs of real and synthetic clusters. As a baseline, the procedure is repeated with real held-out data. The score is the absolute difference between both sums of matching clusters (real-real vs. real-synthetic).

NDB-over/under* [37]. This is an adaptation of the NDB measure. The main difference is that here, the hypothesis tested is the equality of cluster distributions. That is, the distributions of corresponding real and synthetic clusters should be the same. The goal is to detect under-represented and over-represented data regions. The final score is two-fold: one number for the count of under-represented clusters, and one for the count of over-represented clusters.

ONND [3]. The outgoing nearest neighbor distance (ONND) calculates the average DTW distance of any real TS to its nearest synthetic neighbor. Formally,

ONND := \frac{1}{|D_r|} \sum_{X \in D_r} \min_{Y \in D_s} DTW(X, Y). (9)

[3] subsample D_r and D_s to speed up the computation due to the quadratic complexity. We chose a sample size of n = 100.

Predictive score [60]. This measure trains a simple forecasting model on the generated time series to conduct one-step-ahead predictions. Then, it evaluates the model's performance on real data using the mean absolute error (MAE) and returns its mean.

RTS* [42]. Real-to-synthetic similarity (RTS) compares the average cosine similarity within the real data to the cosine similarity between all real TS and 10 random synthetic TS. More specifically, the score is given by

RTS := \frac{1}{10 |D_r|} \sum_{i=1}^{10} \sum_{\vec{x} \in D_r} \frac{\vec{x} \cdot \vec{y}_i}{\|\vec{x}\|_2 \|\vec{y}_i\|_2} - \binom{|D_r|}{2}^{-1} \sum_{i \neq j} \frac{\vec{x}_i \cdot \vec{x}_j}{\|\vec{x}_i\|_2 \|\vec{x}_j\|_2}. (10)

Sig-MMD [32]. Computes the maximum mean discrepancy (MMD) between signature features extracted from the real and synthetic dataset, respectively. We use a signature kernel with a random Fourier features (RFF) map and tensor random projections (TRP) from the KSig library¹.

Spatial correlation [28]. Spatial correlation calculates the squared difference of inter-channel Pearson correlations of multivariate real and synthetic TS. For each time series and on both datasets separately, the correlation coefficient is determined for each pair of channels and averaged within the dataset. To reduce computational cost, both datasets are sampled down to n = 100 TS.

STS* [42]. Synthetic-to-synthetic similarity measures the "typical" cosine
similarity between embedded synthetic TS. Typical means that, for every instance, the distances to only five others are calculated. This measure does not use real data.

Temporal correlation [28]. Computes the channel-wise correlation between observations of each TS using the frequency peaks exposed by the fast Fourier transform (FFT). Then, it calculates the mean squared difference between these correlations in the real and synthetic dataset.

TRTS [13]. "Train on real, test on synthetic" measures how well a model trained on real data performs on the generated data. The absolute difference between the performances on real versus synthetic data is used as the score. For forecasting, the same model as in the predictive score [60] is used.

¹ https://github.com/tgcsaba/KSig

TSTR [13]. "Train on synthetic, test on real" is a utility-based measure determining the usefulness of the synthetic data on a downstream task compared to real data. As the downstream task, we also choose forecasting and utilize the model used for the predictive score [60]. A score is computed by taking the absolute performance difference between the forecaster trained on synthetic data and the one trained on real data, both evaluated on held-out real test data. This last piece sets the two measures apart.

Wavelet coherence score [30]. The wavelet coherence score (WCS) computes the mean pairwise wavelet coherence, an analysis technique for time and frequency correlation, between real and synthetic TS. To reduce computational cost, both datasets are sampled down to n = 100 TS.

WD* [47]. The Wasserstein-1 distance (WD) is a distance function between probability distributions. In this case, it is applied to the scalar values of the TS in the real and synthetic dataset. For each set, a discrete 1D distribution is created by binning the values, and the WD is applied to the real and synthetic binnings.

α-Precision* [1]. This measure is based around a fraction α ∈ [0, 1] of real time series considered "typical" for the data distribution. A synthetic TS falling into the support of the typical part of the real distribution is considered realistic and faithful. A score is deduced by aggregating the deviation between the expected fraction of synthetic TS in the support and the actual fraction over different values of α. The support is determined in a spherical embedding space where samples closer to the origin are meant to be typical instances of the real data distribution.

β-Recall* [1]. Analogous to α-Precision, this measure is based around a fraction β ∈ [0, 1] of synthetic time series considered "typical" for the generator. For each β, the measure determines the fraction of real TS with at least one typical synthetic TS in their vicinity. A score is derived by taking the average divergence between the expected fraction and the actual one across the values of β. The distances are computed in a spherical embedding space where samples closer to the origin are meant to be typical instances of the real data distribution.

The following three measures are part of the benchmark but were not used in any of the experiments:

Distribution visualization [60]. This is a qualitative measure, creating a dot plot visualizing the distributions of
the embedded real and synthetic data. The embedding is computed with t-distributed stochastic neighbor embedding (t-SNE) or principal component analysis (PCA) of a subsample of n = 1000 instances each.

Visual assessment [3]. This measure is based on the visual evaluation of generated time series data by plotting a (small) subsample of the synthetic dataset. Ideally, the plots are assessed by multiple domain experts.

Realism score* [25]. The measure approximates instance fidelity via the position of an embedded synthetic TS in the real data manifold. The closer the generated TS is to a real TS compared to other real TS, the more realistic the generated TS is. The Euclidean distance is used for the distance calculation. Realism is a sample-level variant of improved precision and therefore not adequate for our dataset-level experiments. Instead, we test improved precision.

B Details on Datasets, Embedders, and Randomness

B.1 Datasets

In both experiments, we used the following ten datasets. This selection covers multiple source domains, a wide range of dataset sizes, TS lengths, and TS dimensions, as well as labeled and unlabeled data. In addition, the values themselves are diverse with respect to three statistical characteristics. Please find a summary of key characteristics in Table 2.

Table 2: Summary of dataset statistics. For each dataset, this table includes its domain and size, the length and dimension of the contained TS, the number of classes, the average singular value decomposition (SVD) entropy [54], the average permutation entropy [4], and the average correlation between TS features.

Dataset               | Domain     | Size   | TS length | TS dim. | Classes | ∅SVD entropy | ∅Perm. entropy | ∅Feature corr.
Appliances energy     | Smart home | 19592  | 144       | 28      | -       | 0.265        | 1.701          | 0.044
ElectricDevices       | Devices    | 16637  | 96        | 1       | 7       | 1.392        | 1.343          | -
Exchange rate         | Finance    | 7559   | 30        | 8       | -       | 0.044        | 2.293          | 0.304
Google stock          | Finance    | 3662   | 24        | 6       | -       | 0.271        | 2.311          | 0.624
PPG and respiration   | Medical    | 21600  | 125       | 5       | -       | 0.409        | 1.670          | -0.106
PTB diagnostic ECG    | Medical    | 57618  | 1000      | 15      | 11      | 0.271        | 2.302          | 0.098
Sine                  | -          | 10000  | 100       | 2       | 5       | 0.348        | 1.583          | 0.524
StarLightCurves       | Sensor     | 9236   | 1024      | 1       | 3       | 0.093        | 1.059          | -
UniMiB SHAR           | Motion     | 11771  | 151       | 3       | 17      | 1.068        | 2.229          | -0.004
Wikipedia web traffic | Networking | 117277 | 550       | 1       | -       | 0.497        | 2.522          | -

Appliances energy. The UCI Appliances energy prediction dataset consists of multivariate measurements recorded by sensors in a low-energy building, augmented by weather readings and two random variables [9]. Measurements were taken at 10-minute intervals for approximately 4.5 months. By sliding a window of 144 steps with stride one along the time axis, we create a set of overlapping, individual, multivariate time series.

ElectricDevices. This dataset is part of the UCR Time Series Classification Archive [11], which comprises 128 labeled subsets from different domains and with different characteristics. We chose ElectricDevices as the best fit with the other sets in our selection.

Exchange rate. A collection of the daily exchange rates for the eight currencies of Australia, Britain, Canada, Switzerland, China, Japan, New Zealand, and Singapore, ranging from 1990 to 2016 [26]. We again apply the sliding window approach with stride one.

Google
https://arxiv.org/abs/2505.21160v1
stock This set contains the daily historical Google stocks data from 2004 to 2019 in one continuous, aperiodic sequence with features volume, high, low, opening, closing, and adjusted closing prices [60]. We apply the sliding window approach again with stride one. PPG and respiration Assembled by the Beth Israel Deaconess Medical Centre(BIDMC), this dataset contains physiological signals and static features extracted from the much larger MIMIC-II matched waveform database [ 44,16]. We extracted five dynamic features plus labels from the 45patients for which they are provided: RESP, PLETH, V , A VR, and II. The sequence length of 125corresponds to 1s of recordings (Sampling rate 125Hz). PTB diagnostic ECG The PTB diagnostic ECG database is a collection of 549 15-lead ECGs (i.e., 15 feature channels) for 294 patients, including clinical summaries for each record [ 7,16]. We extract subsequences of 1000 steps, which corresponds to 1s-long recordings (Sampling rate 1000 Hz). Further, we use the eleven diagnosed conditions as class labels. Sine This is a self-crafted dataset of time series composed of two sine waves each. The set contains multiple, imbalanced classes which differ in wave amplitude, x-shift, phase length, and phase offset between feature channels. StarLightCurves This is the second dataset from the UCR Time Series Classification Archive [ 11]. The TS are labeled. UniMiB SHAR Researchers from the University of Milano-Bicocca created this mulitvariate dataset by collecting acceleration samples acquired with an Android smartphone [ 38]. The three features represent X-, Y-, and Z-coordinates. Each instance is labeled with one of 17 activities, which we use as classes. Wikipedia web traffic This set contains visitation data for over 100,000Wikipedia articles [ 35]. Each of the TS included represents the number of daily views of a different Wikipedia article, starting from July 1st, 2015 up until December 31st, 2016. 
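Several of the datasets above (Appliances energy, Exchange rate, Google stock) are cut into overlapping instances with a stride-one sliding window. A minimal sketch of this extraction, assuming NumPy arrays of shape (time, features); this is our illustration, not the authors' preprocessing code:

```python
import numpy as np

def sliding_windows(series: np.ndarray, length: int, stride: int = 1) -> np.ndarray:
    """Cut a (T, d) multivariate series into overlapping (length, d) windows."""
    T = series.shape[0]
    starts = range(0, T - length + 1, stride)
    return np.stack([series[s:s + length] for s in starts])

# Toy stand-in for a long recording: 200 steps, 28 feature channels.
data = np.random.rand(200, 28)
windows = sliding_windows(data, length=144)  # window length as for Appliances energy
print(windows.shape)  # (57, 144, 28): 200 - 144 + 1 = 57 overlapping instances
```

With stride one, consecutive windows share all but one time step, which is why the resulting instance counts in Table 2 are close to the raw recording lengths.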
The data was originally compiled for a competition with training and test data. We, however, only use the train set.

B.2 Embedders

STEB currently offers two non-trivial embedding models, which we also used for our experiments. These are:

TS2Vec. TS2Vec is a deep-learning model for time series embedding [61]. The model features a CNN-based encoder consisting of cascading dilated convolutional blocks. Training is conducted via a hierarchical contrastive loss, which is crucial to the method's success. The learned sequence representations are aggregated from representations of individual time steps created by the convolution blocks.

Catch22. The second model is the feature extractor catch22 [33]. It computes a diverse set of 24 statistical descriptors of a given univariate time series or of one feature channel of a multivariate time series. For the latter, we concatenate the feature vectors of the channels to obtain an embedding for the entire time series, i.e.,

    X \mapsto \text{catch22}(\vec{c}_0) \,\|\, \text{catch22}(\vec{c}_1) \,\|\, \dots \,\|\, \text{catch22}(\vec{c}_{d-1}),    (11)

where d is the number of channels.

B.3 Randomness

In our experiments, randomness plays a role at different stages: mainly in splitting the real dataset after preprocessing, while training the TS2Vec embedder, and during the execution of measures. During one test run, we use the same random seed, specified in the test parametrization, at
every step of the test to ensure reproducibility. For the Main experiment, the ten seeds tried are 42, 461900, 854324, 679123, 107460, 952343, 580127, 893234, 560239, and 501932. Due to time constraints, the Embedders experiment is limited to a subset of five seeds, namely 952343, 580127, 893234, 560239, and 501932.

C Additional Benchmark Details

Below, we describe the supporting components of STEB in more detail. Their context within STEB and the relationships between components are visualized in Figure 2.

Configuration & Management. This component guides and coordinates the experiment, starting with loading and validating the configuration, creating the parameter sets of the tests to run, and initiating the test processing. It monitors the execution and triggers recovery if necessary. Experiments can be flexibly executed in two modes of operation: sequential and parallel. In sequential mode, the tests are processed one after the other; caching and logging are done in the file system. The advantage is low computational overhead and few additional dependencies. In parallel mode, the component spawns a user-specified number of worker instances that select tests from a pool and process them in parallel. Users can also limit CPU and RAM use and choose between GPU-enabled and CPU-only workers. Technically, these workers are Docker containers, which implies additional dependencies to run STEB but speeds up the processing and facilitates (dependency) isolation of the measures to be evaluated.

Storage. In the simple sequential mode, everything is stored in a workspace in the file system. In parallel mode, the monitoring, logging, caching, handling of results, and evaluation are optimized using a MongoDB² database. This is also more user-friendly, as database monitoring tools support easy, visual tracking of the experiment progress, particularly of failed tests.

Caching.
Many artifacts, such as processed datasets, trained models, or distance matrices produced by preprocessing, embedders, and measures, are duplicated across the modulation steps inside a test and throughout the different tests of an experiment. To speed up the running time, conserve valuable resources, and save energy, caching can be enabled. When it is enabled, each artifact is created once, stored away, and loaded whenever needed, provided that the test parameters match.

Recovery. Since individual tests sometimes fail, workers crash, and experiments are interrupted, this component handles the return to a valid program state. This includes seamlessly continuing experiments, automatically restarting workers, and cleaning up inconsistent caches.

² https://www.mongodb.com

Table 3: Summary of the expected measure behaviors per category in each test. We differentiate four behaviors: the score improves (↗), worsens (↘), remains constant (c), or is not applicable (-). Transformations marked with * are preceded by a shuffle of the input dataset, even for κ = 0.

Transformation           Fidelity  Generalization  Privacy  Representativeness
Label corruption         ↘         c               -        ↘
Gaussian noise*          ↘         ↗               ↗        ↘
Misaligning channels*    ↘         c               ↗        ↘
Mode dropping*           c         c               ↗        ↘
Mode collapse*           c         c               ↗        ↘
Moving average*          ↘         ↗               ↗        ↘
Rare event sensitivity*  c         c               ↗        ↘
Reverse substitution*    c         ↘               ↘        c
Salt & Pepper*           ↘         ↗               ↗        ↘
Segment leaking*         ↘         ↘               ↘        ↘
STL*                     ↘         ↗               ↗        ↘
Substitution*            c         ↗               ↗        c
Wavelet transform*       ↘         ↗               ↗        ↘

Logging. This component logs various aspects of the experiment execution, informing the user and providing transparency. It records the parameter set for each test and tracks the live status of individual tests, which can be waiting (todo), ongoing, successful, or failed. Additionally, status information is recorded, such as the reason for failure. Furthermore, the beginning and end of each test processing are logged and, optionally, the running times for each embedder and measure invocation are saved.

D Details of the Evaluation Procedure

In the two subsections below, we outline the mathematical definitions of reliability r_rel and consistency r_con. An overview of the expected behavior of each measure used to calculate r_rel is provided in Table 3.

D.1 Calculating the Reliability Indicator

Below, we provide the formal definition of r_rel(t) for task t with scores s_0, \dots, s_{k-1}.

Improve, real.

    r_{rel}(t) = \frac{2}{k(k-1)} \sum_{i=0}^{k-2} \sum_{j=i+1}^{k-1} \mathbf{1}\{s_i < s_j\}    (12)

Improve, boolean. Let w^t = (w^t_0, \dots, w^t_{k-1})^T and w^f = (w^f_0, \dots, w^f_{k-1})^T with w^t_i = i, w^f_i = k-i-1 be weight vectors that assign points to the scores based on their position on the modulation path. We define

    r_{rel}(t) = \frac{r_{nominal} - r_{min}}{r_{max} - r_{min}}    (13)

for

    r_{nominal} = \sum_{i=0}^{k-1} w^t_i \cdot \mathbf{1}\{s_i\} + w^f_i \cdot \mathbf{1}\{\neg s_i\}    (14)

and

    r_{min} = \sum_{i=0}^{\lceil k/2 \rceil - 1} w^t_i + \sum_{i=\lceil k/2 \rceil}^{k-1} w^f_i, \qquad r_{max} = \sum_{i=0}^{\lceil k/2 \rceil - 1} w^f_i + \sum_{i=\lceil k/2 \rceil}^{k-1} w^t_i.    (15)

Worsen, real. Analogous to improve with >.

Worsen, boolean. Analogous to improve with w^t_i = k-i-1, w^f_i = i and

    r_{min} = \sum_{i=0}^{\lceil k/2 \rceil - 1} w^f_i + \sum_{i=\lceil k/2 \rceil}^{k-1} w^t_i, \qquad r_{max} = \sum_{i=0}^{\lceil k/2 \rceil - 1} w^t_i + \sum_{i=\lceil k/2 \rceil}^{k-1} w^f_i.    (16)

Constant, real. Let \mu be the median score of m on t and \epsilon = 0.05. We define

    r_{rel}(t) = \frac{1}{k-1} \left|\left\{ (s_i, \mu) \;\middle|\; \frac{|\mu - s_i|}{\mu} \leq \epsilon \wedge s_i \neq \mu \right\}\right|    (17)

Constant, boolean. Similar to the real-valued s_i, for boolean values we have

    r_{rel}(t) = \frac{1}{k-1} \left|\{ (s_i, s_{i+1}) \mid \mathrm{XNOR}(s_i, s_{i+1}),\ 0 \leq i < k-1 \}\right|.    (18)

Example.
Assume a very small experiment of an arbitrary measure m with just two tests t_0, t_1, resulting in eleven real-valued scores each,

    s = [0, 0, 1, 2, 4, 3, 5, 6, 7, 8, 7] \quad \text{and} \quad \sigma = [2.8, 3.0, 2.9, 2.9, 3.0, 3.0, 3.1, 3.0, 3.2, 3.1, 3.0].    (19)

s was calculated for the transformation misalignment, σ for mode dropping. The value at position 0 was calculated with κ = 0, position 1 with κ = 0.1, etc. We will determine r_rel for the category fidelity. According to Table 3, we expect diminishing fidelity as the misalignment of channels increases, i.e., s to get worse. Hence, we follow paragraph Worsen, real and calculate

    r_{rel}(t_0) = \frac{2}{11(11-1)} \sum_{i=0}^{11-2} \sum_{j=i+1}^{11-1} \mathbf{1}\{s_i > s_j\} = 0.036.    (20)

On the other hand, we expect approximately constant fidelity of the remaining samples even if modes are dropped. Based on Constant, real, we calculate

    r_{rel}(t_1) = \frac{1}{11-1} \left|\left\{ (s_i, 3) \;\middle|\; \frac{|3 - s_i|}{3} \leq 0.05 \wedge s_i \neq 3 \right\}\right| = 0.8.    (21)

Hence,

    r_{rel} = \frac{r_{rel}(t_0) + r_{rel}(t_1)}{2} = 0.418,    (22)

attesting m a mediocre to bad reliability in fidelity.

D.2 Calculating the Consistency Indicator

Now, the computation of consistency r_con is as follows, starting with the consistency regarding a changing random seed. First, group the r_rel(t) by random seed for all t in the experiment. Assuming there are n random seeds, we have G_0, \dots, G_{n-1}. Then, we apply the Kolmogorov–Smirnov test for two samples [18] to the pairs (G_i, G_j), i < j, count the pairs identified as following the same distribution, and
normalize it by the number of pairs:

    r_{con} = \frac{2}{n(n-1)} \sum_{i=0}^{n-2} \sum_{j=i+1}^{n-1} \mathbf{1}\{\text{ks\_2sample}(G_i, G_j)\}.    (23)

The consistency w.r.t. a changing dataset is analogous; just replace "random seed" by "dataset" above.

E Details of the Experiments

In this section, we provide further details on the two experiments conducted, Main and Embedders. Listing 1 contains the configuration for Main. Parameters in the lower part are used for management purposes and influence only how the tests are executed, not what they do. Statistics computed for both experiments, including the number of tests and their success rate, can be found in Table 4. The reasons for failed tests, on the other hand, are listed in Table 5. They range from excessive running time and (GPU) memory overflow to measure-specific errors that can occur in normal operation.

Listing 1: The config file for the Main experiment. It specifies the setup of an experimental run, including which datasets to use (here: all ten), the transformations, embedders, and measures. Transformations can be nested once, i.e., they can be chained within one test and are applied sequentially. For more details, especially on the other parameters, please see the documentation of the referenced repository.
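The pair-wise consistency computation of Eq. (23) can be sketched as follows. This is our illustration, not STEB's implementation; the two-sample test is left pluggable (STEB uses the Kolmogorov–Smirnov test), and the toy predicate below is a deliberately simple stand-in:

```python
from itertools import combinations

def r_con(groups, same_distribution):
    """Fraction of group pairs (G_i, G_j), i < j, judged to follow the
    same distribution; groups holds one list of r_rel values per seed
    (or per dataset, for dataset consistency)."""
    pairs = list(combinations(range(len(groups)), 2))
    hits = sum(1 for i, j in pairs if same_distribution(groups[i], groups[j]))
    return hits / len(pairs)

# Toy stand-in predicate (NOT a KS test): groups "agree" if their means
# differ by less than 0.1.
close_means = lambda a, b: abs(sum(a) / len(a) - sum(b) / len(b)) < 0.1

groups = [[0.5, 0.6], [0.55, 0.62], [0.1, 0.2]]
print(r_con(groups, close_means))  # 1 of 3 pairs agree -> 0.333...
```

Swapping `close_means` for a proper two-sample KS decision (e.g., thresholding the p-value of `scipy.stats.ks_2samp`) recovers the computation described above.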
name: main
datasets: ALL
transformations: [
    [shuffle, gn_moderate],
    [shuffle, salt_and_peper_noise],
    [shuffle, misalignment],
    [shuffle, substitution],
    [shuffle, mode_dropping],
    [shuffle, mode_collapse],
    [shuffle, reverse_sub_clean],
    corrupt_labels,
    [shuffle, segment_leaking],
    [shuffle, rare_event_drop],
    [shuffle, moving_average],
    [shuffle, stl_decomposition],
    [shuffle, wavelet_transform],
]
embedders: [ts2vec]
measures: [
    icd, ap_en, innd, onnd, spatial, temporal, c_t, sts, max_rts, rts,
    jsd, kld, wd_on_pmf, auto_corr, wcs, ndbou, ndb, m_top_div,
    Coverage, Density, improved_precision, improved_recall,
    distributional_metric, context_fid, discriminative, predictive,
    detection_mlp, detection_xgb, detection_gmm, detection_linear,
    tstr, trts, cas, c2st, alpha_precision, beta_recall, authenticity,
    acs, fbca, domias, sig_mmd
]
seeds: [42, 461900, 854324, 679123, 107460,
        952343, 580127, 893234, 560239, 501932]
data_feeds: [feed_train]
use_cache: true
workers: {cpu: [[4, 100], [4, 100]],
          gpu: [[4, 100], [4, 100], [4, 100]]}
use_database: true
rebuild_image: false
record_runtime: true
restart_failed: false
test_time_limit: 120
compare_results_to: {embedding: []}

F Complementary Results

Additional results for the experiments can be found here across different tables and figures. In Table 6, we provide a reliability ranking of the measures in the four categories. This is an alternative presentation of the information in Table 1, which is sorted alphabetically. Here, we see more clearly how close the reliability indicators of neighboring measures
are. For lack of space in the main body of the paper, we report the consistency indicators in Table 7. They are discussed in Section 5.1. Similarly, the average running times recorded for measure executions are listed in Table 8, those for embedding procedures in Table 11. There are also long versions of both tables, Table 9, Table 10, and Table 12, which additionally include the standard deviation, the number of valid measurements, and the number of invalid measurements for each measure/embedder-dataset combination. Additionally, we used STEB's evaluation component to conduct a statistical analysis of the reliability indicators in each of the four categories. To this end, the Kruskal–Wallis H test [24] is employed as an omnibus test to determine whether there are statistical differences between any of the indicators, and the Mann–Whitney U test [36] with Bonferroni correction is applied post hoc to each pair of measures. The results are visualized in four critical difference diagrams [12], Figures 3 through 6.

Table 4: Statistics for the experiments. For each measure, this table lists the total number of tests attempted, the number of successful tests, and the success rate. The totals across all measures are provided at the bottom of the table. Only embedding-dependent measures were tested in the Embedders experiment; entries for other measures are marked as not applicable (N/A).
Measure               Main: #Total #Succ %Succ   Embedders: #Total #Succ %Succ
α-Precision           1020 998 98                1190 1149 97
β-Recall              1020 1000 98               1171 1119 96
CT                    1020 988 97                1093 922 84
ACS                   1020 1000 98               N/A
ApEn                  1020 1000 98               N/A
Authenticity          1020 998 98                1197 1152 96
Autocorrelation       1020 1000 98               N/A
C2ST                  1020 996 98                N/A
CAS                   631 619 98                 N/A
Context-FID           1020 999 98                1159 895 77
Coverage              1020 1000 98               1100 1073 98
DOMIAS                1020 1 0                   1038 89 9
Density               1020 999 98                1088 1064 98
Detection_GMM         1020 998 98                1061 921 87
Detection_MLP         1020 996 98                1190 1136 95
Detection_XGB         1020 1000 98               1083 1042 96
Detection_linear      1020 998 98                1086 1063 98
Discriminative score  1020 996 98                N/A
Distr. metric         1020 999 98                N/A
FBCA                  1020 1000 98               1440 1424 99
ICD                   1020 1000 98               N/A
INND                  1020 1000 98               N/A
Improved precision    1020 999 98                1184 957 81
Improved recall       1020 999 98                1186 967 82
JSD                   1020 999 98                1078 1056 98
KLD                   1020 996 98                1060 1038 98
MTop-Div              1020 1000 98               1209 1160 96
Max-RTS               1020 999 98                1164 1040 89
NDB                   1020 991 97                1071 1014 95
NDB-over/under        1020 924 91                1076 1053 98
ONND                  1020 1000 98               N/A
Predictive score      1020 999 98                N/A
RTS                   1021 999 98                1158 1112 96
STS                   1020 998 98                1074 1053 98
Sig-MMD               1020 691 68                N/A
Spatial correlation   1020 998 98                N/A
TRTS                  1020 992 97                N/A
TSTR                  1020 998 98                N/A
Temporal correlation  1020 1000 98               N/A
WCS                   1020 880 86                N/A
WD                    1020 997 98                1078 1058 98
Total                 41432 39044 94             27234 24557 90

Table 5: Reasons for test failure and the number of such failures for each experiment. Note that the
overall number of tests also varies from experiment to experiment.

Failure                                                                               Main   Embedders
CUDA out of memory                                                                    1061   1369
Memory limit of 100 GB exceeded                                                       7      123
Time limit of 120 minutes exceeded                                                    1037   707
NDB-over/under: too many cells in partition / no samples in a cell                    81     0
Other CUDA/cuDNN runtime error                                                        0      0
Non-CUDA runtime error                                                                0      0
DOMIAS: BNAF density estimator produced illegal values                                191    114
CT: cell x is missing test or training samples                                        9      105
Context-FID: imaginary component in Fréchet distance calculation                      0      249
Detection_GMM: fitting the mixture model failed                                       0      10
Spatial correlation: cannot compute Pearson correlation for any of the given samples  2      0

Figure 3: Critical difference diagram for reliability indicator r_rel in category fidelity as part of Main. The horizontal axis at the top depicts r_rel. Additional horizontal bars connect groups of measures with no significantly different r_rel value.

Figure 4: Critical difference diagram for reliability indicator r_rel in category generalization as part of Main. The horizontal axis at the top depicts r_rel. Additional horizontal bars connect groups of measures with no significantly different r_rel value.

Figure 5: Critical difference diagram for reliability indicator r_rel in category privacy as part of Main. The horizontal
axis at the top depicts r_rel. Additional horizontal bars connect groups of measures with no significantly different r_rel value.

Figure 6: Critical difference diagram for reliability indicator r_rel in category representativeness as part of Main. The horizontal axis at the top depicts r_rel. Additional horizontal bars connect groups of measures with no significantly different r_rel value.

Table 6: Measure reliability ranking for experiment Main. This is an alternative presentation of Table 1, ranking the measures in each of the four categories by r_rel. For ease of use, mean and standard deviation (Mean±StD) are provided with each occurrence of the measure. DOMIAS is again excluded and placed at the bottom.
Rank | Fidelity | Generalization | Privacy | Representativeness
1 | α-Precision .783±.305 | ACS .726±.320 | Autocorrelation .773±.306 | α-Precision .746±.314
2 | WCS .774±.274 | Max-RTS .684±.418 | ACS .676±.313 | Coverage .717±.360
3 | Coverage .770±.323 | Autocorrelation .624±.417 | Authenticity .667±.391 | WCS .713±.278
4 | Detection_MLP .739±.246 | STS .616±.383 | ICD .571±.303 | Detection_MLP .703±.236
5 | Density .731±.368 | ICD .615±.314 | Max-RTS .524±.436 | Density .696±.362
6 | Improved recall .715±.339 | C2ST .565±.203 | STS .506±.362 | Improved recall .691±.344
7 | ONND .713±.332 | Authenticity .530±.442 | C2ST .475±.102 | ONND .689±.322
8 | INND .697±.301 | Detection_GMM .504±.269 | CT .448±.439 | INND .683±.298
9 | C2ST .660±.194 | WCS .448±.329 | Detection_GMM .438±.200 | Context-FID .678±.406
10 | Detection_linear .659±.366 | CT .433±.459 | TSTR .415±.184 | Detection_linear .674±.345
11 | Detection_GMM .641±.278 | Detection_MLP .418±.300 | Predictive score .409±.186 | RTS .656±.348
12 | STS .630±.415 | MTop-Div .394±.320 | MTop-Div .355±.299 | Distr. metric .656±.405
13 | WD .617±.404 | CAS .387±.258 | Discriminative score .348±.204 | FBCA .650±.392
14 | JSD .617±.421 | Predictive score .384±.207 | WCS .337±.225 | JSD .649±.393
15 | β-Recall .608±.435 | α-Precision .358±.381 | Detection_MLP .333±.212 | WD .647±.376
16 | KLD .602±.408 | INND .336±.296 | TRTS .330±.339 | KLD .638±.380
17 | RTS .601±.388 | ONND .333±.320 | ApEn .324±.244 | C2ST .603±.170
18 | MTop-Div .601±.318 | TSTR .332±.228 | CAS .320±.197 | MTop-Div .601±.311
19 | Context-FID .595±.455 | Density .324±.415 | Temporal correlation .314±.229 | Detection_XGB .600±.401
20 | Distr. metric .594±.438 | TRTS .324±.358 | INND .310±.273 | β-Recall .599±.425
21 | FBCA .576±.432 | RTS .300±.334 | RTS .305±.307 | Detection_GMM .590±.257
22 | Improved precision .556±.423 | NDB .291±.343 | ONND .283±.259 | TRTS .586±.397
23 | TRTS .551±.416 | Improved recall .290±.386 | FBCA .266±.355 | Sig-MMD .584±.428
24 | Predictive score .542±.258 | Discriminative score .286±.229 | α-Precision .261±.311 | STS .578±.398
25 | Sig-MMD .532±.440 | Improved precision .286±.409 | Spatial correlation .258±.256 | Predictive score .570±.229
26 | Detection_XGB .530±.424 | Coverage .263±.390 | Density .257±.353 | Temporal correlation .542±.342
27 | NDB .473±.384 | β-Recall .253±.397 | KLD .256±.319 | ApEn .539±.347
28 | Temporal correlation .462±.382 | KLD .251±.337 | Context-FID .246±.365 | Improved precision .504±.417
29 | ApEn .460±.385 | Detection_linear .244±.361 | JSD .239±.328 | TSTR .501±.271
30 | ICD .433±.350 | ApEn .244±.247 | WD .233±.302 | NDB .415±.369
31 | TSTR .429±.309 | Temporal correlation .243±.241 | Distr. metric .222±.334 | CAS .396±.259
32 | ACS .416±.413 | WD .237±.330 | Improved precision .222±.354 | ICD .395±.316
33 | CAS .404±.271 | JSD .233±.346 | Detection_XGB .220±.331 | Discriminative score .379±.252
34 | Max-RTS .382±.458 | Spatial correlation .216±.252 | β-Recall .210±.349 | Spatial correlation .352±.346
35 | Discriminative score .326±.274 | FBCA .215±.341 | Detection_linear .209±.328 | ACS .347±.369
36 | Spatial correlation .307±.348 | Detection_XGB .204±.326 | NDB .197±.255 | NDB-over/under .313±.277
37 | NDB-over/under .295±.280 | Context-FID .189±.348 | Improved recall .190±.308 | Max-RTS .231±.389
38 | CT .158±.314 | Distr. metric .188±.325 | Coverage .165±.311 | Autocorrelation .141±.225
39 | Autocorrelation .083±.165 | Sig-MMD .158±.265 | Sig-MMD .163±.263 | CT .115±.216
40 | Authenticity .052±.124 | NDB-over/under .098±.160 | NDB-over/under .075±.124 | Authenticity .098±.189
- | DOMIAS 1.000±.000 | DOMIAS .000±.000 | DOMIAS .000±.000 | DOMIAS 1.000±.000

Table 7: Measure consistency indicators for experiment Main. For each measure, we list r_con computed for both a changing dataset and a changing random seed. The lower r_con, the more the measure scores sway with the choice of parameter; 1.0 means equal reliability on all datasets/random seeds. Columns per category: Dataset, Seed.

Measure               Fidelity    Generalization  Privacy     Representativeness
α-Precision           .533 1.     .400 1.         .400 1.     .489 1.
β-Recall              .422 1.     .444 1.         .511 1.     .444 1.
CT                    .467 1.     .311 1.         .311 1.     .467 1.
ACS                   .133 1.     .267 1.         .133 1.     .111 1.
ApEn                  .244 1.     .333 1.         .400 1.     .400 1.
Authenticity          .600 1.     .400 1.         .444 1.     .422 1.
Autocorrelation       .889 1.     .400 1.         .356 1.     .511 1.
C2ST                  .422 1.     .289 1.         .444 1.     .511 1.
CAS                   .200 1.     .400 1.         .400 1.     .400 1.
Context-FID           .467 1.     .822 1.         .578 1.     .711 1.
Coverage              .200 1.     .289 1.         .444 1.     .289 1.
Density               .244 1.     .178 1.         .200 1.     .244 1.
Detection_GMM         .422 .689   .622 .533       .800 .533   .311 .689
Detection_MLP         .333 1.     .578 1.         .756 .978   .289 1.
Detection_XGB         .444 1.     .511 1.         .578 1.     .422 1.
Detection_linear      .156 1.     .511 1.         .511 1.     .178 1.
Discriminative score .422 1. .333 1. .333 .956 .378 .978 Distr. metric .511 1. .733 1. .600 1. .511 1. FBCA .556 1. .778 1. .689 1. .756 1. ICD .111 1. .222 1. .267 1. .133 1. INND .289 1. .444 1. .400 1. .267 1. Improved precision .133 1. .400 1. .400 1. .156 1. Improved recall .178 1. .578 1. .600 1. .156 1. JSD .844 1. .667 1. .600 1. .622 1. KLD .822 .956 .800 1. .933 1. .711 1. MTop-Div .200 1. .289 1. .311 1. .222 1. Max-RTS .444 1. .511 1. .356 1. .578 1. NDB .222 1. .511 .933 .422 .933 .244 1. NDB-over/under .378 1. .467 1. .489 1. .422 1. ONND .333 1. .467 1. .533 1. .333 1. Predictive score .178 1. .511 1. .667 1. .422 1. RTS .311 1. .356 1. .489 1. .400 1. STS .244 1. .311 1. .267 1. .267 1. Sig-MMD .472 1. .417 1. .444 1. .528 1. Spatial correlation .111 1. .244 1. .200 1. .244 1. TRTS .378 1. .333 1. .222 1. .244 1. TSTR .489 1. .578 1. .667 1. .822 .978 Temporal correlation .378 1. .356 1. .556 1. .444 1. WCS
https://arxiv.org/abs/2505.21160v1
.306 1. .472 1. .500 1. .361 1. WD .711 .978 .622 1. .556 1. .556 1. DOMIAS N/A 30 Table 8: Running time of the measure executions recorded on the Main experiment sorted by average rank. The listed values are the average time required to apply the measure to given data across multiple modulation steps and tests. All values are rounded to seconds. Asterisks (*) indicate embedder-dependence, the embedding time is excluded. N/A indicates the absence of successful tests. Measure Appliances energy ElectricDevices Exchange rate Google stock PPG and respiration PTB diagnostic ECG Sine StarLightCurves UniMiB SHAR Wikipedia web traffic 1Temporal correlation 0 0 0 0 0 0 0 0 0 0 2Spatial correlation 0 0 0 0 0 0 0 0 0 0 3Context-FID* 0 0 0 0 0 0 0 0 0 0 4FBCA* 0 0 0 0 0 0 0 0 0 0 5Improved recall* 0 0 0 0 0 0 0 0 0 1 6Improved precision* 0 0 0 0 0 0 0 0 0 1 7Detection_linear* 0 0 0 0 0 0 0 0 0 1 8Distr. metric 0 0 0 0 0 3 0 0 0 0 9JSD* 1 0 0 0 0 1 0 0 0 6 10 WD* 1 1 0 0 0 1 0 0 0 7 11 KLD* 1 1 0 0 0 2 0 0 0 6 12 ACS 2 0 0 0 1 28 0 0 0 7 13 NDB-over/under* 1 0 0 0 1 3 0 0 1 4 14 Sig-MMD 1 1 0 0 1 N/A 0 1 0 2 15 NDB* 1 1 0 0 1 2 0 0 0 9 16 Autocorrelation 5 0 0 0 1 660 0 1 0 12 17 CT* 1 1 0 0 1 3 0 0 1 6 18 Density* 1 2 0 0 2 3 1 0 1 3 19 ApEn 4 0 1 1 1 16 0 1 0 0 20 Coverage* 1 1 0 0 2 3 1 1 1 3 21 Detection_XGB* 1 2 1 1 2 4 1 1 2 6 22 Max-RTS* 3 3 1 0 3 12 1 1 2 30 23 STS* 4 3 1 1 4 11 2 2 2 21 24 ICD 3 3 3 3 3 11 3 12 3 4 25 INND 3 3 3 3 3 12 3 11 3 4 26 ONND 3 3 3 3 3 11 3 11 3 4 27 RTS* 5 4 2 1 5 18 2 2 3 49 28 Detection_GMM* 8 11 2 1 17 49 4 3 3 71 29 β-Recall* 9 8 4 2 15 33 6 5 6 67 30 α-Precision* 9 9 3 2 14 40 5 5 7 70 31 Authenticity* 8 10 4 2 11 42 5 4 6 77 32 TRTS 25 11 9 4 26 59 12 14 15 88 33 Predictive score 26 10 11 5 24 63 11 13 16 87 34 Discriminative score 25 23 9 5 36 124 16 22 19 216 35 C2ST 22 23 10
https://arxiv.org/abs/2505.21160v1
5 40 125 16 22 20 219 36 Detection_MLP* 34 32 12 6 43 85 18 17 20 157 37 MTop-Div* 100 38 54 40 40 42 38 42 40 40 38 TSTR 46 20 19 8 51 122 23 25 31 168 39 WCS 430 9 26 18 54 N/A 19 130 48 77 40 CAS N/A 80 N/A N/A N/A 466 30 41 90 N/A 41 DOMIAS* N/A 31 Table 9: Running time statistics of the measure executions extending Table 8 for Appliances energy ,ElectricDevices ,Exchange rate ,Google stock , and PPG and respiration . In addition to the average running time (Mean), we provide for each dataset and measure the standard deviation (StD) of the measurements, the number of complete, un-aided executions (Valid), and the cache-aided executions (Cached). Aided means that the execution was accelerated by the use of previously computed and cached artifacts. Running times for DOMIAS not available. Measure Appliances energy ElectricDevices Exchange rate Google stock PPG and respiration Mean StD Valid Cached Mean StD Valid Cached Mean StD Valid Cached Mean StD Valid Cached Mean StD Valid Cached 1Temporal correlation 0 0 20 970 0 0 20 1190 0 0 20 970 0 0 20 970 0 0 20 970 2Spatial correlation 0 0 20 970 0 0 20 1190 0 0 20 970 0 0 20 948 0 0 20 970 3Improved recall 0 0 990 0 0 01210 0 0 0 990 0 0 0 990 0 0 0 990 0 4Context-FID 0 0 990 0 0 01210 0 0 0 990 0 0 0 990 0 0 0 990 0 5FBCA 0 0 20 970 0 0 20 1190 0 0 20 970 0 0 20 970 0 0 20 970 6Improved precision 0 0 20 970 0 0 20 1190 0 0 20 970 0 0 20 970 0 0 20 970 7Distr. 
metric 0 0 20 970 0 0 20 1190 0 0 20 970 0 0 20 970 0 0 20 970 8Detection_linear 0 0 990 0 0 01210 0 0 0 990 0 0 0 990 0 0 0 990 0 9JSD 1 0 20 959 0 0 20 1190 0 0 20 970 0 0 20 970 0 0 20 970 10ACS 2 0 990 0 0 01210 0 0 0 990 0 0 0 990 0 1 0 990 0 11Autocorrelation 5 2 20 970 0 0 20 1190 0 0 20 970 0 0 20 970 1 0 20 970 12WD 1 0 20 948 1 0 20 1190 0 0 20 970 0 0 20 970 0 0 20 970 13KLD 1 0 20 970 1 0 20 1190 0 0 20 970 0 0 20 970 0 0 20 970 14NDB 1 0 20 970 1 0 20 1190 0 0 20 970 0 0 20 970 1 0 20 970 15Sig-MMD 1 0 737 0 1 01210 0 0 0 990 0 0 0 990 0 1 0 473 0 16NDB-over/under 1 0 20 849 0 0 20 1190 0 0 20 871
https://arxiv.org/abs/2505.21160v1
0 0 20 849 1 0 20 948 17CT 1 0 20 937 1 0 20 1190 0 0 20 970 0 0 20 904 1 0 20 970 18Density 1 0 10 980 2 1 8 1202 0 0 11 979 0 0 9 981 2 1 11 979 19Coverage 1 0 10 980 1 1 12 1198 0 0 9 981 0 0 11 979 2 1 9 981 20ApEn 4 0 20 970 0 0 20 1190 1 0 20 970 1 0 20 970 1 0 20 970 21Detection_XGB 1 1 990 0 2 01210 0 1 0 990 0 1 0 990 0 2 0 990 0 22Max-RTS 3 4 990 0 3 31210 0 1 1 990 0 0 1 990 0 3 4 990 0 23STS 4 1 968 0 3 11210 0 1 0 990 0 1 0 990 0 4 1 990 0 24ICD 3 0 990 0 3 01210 0 3 0 990 0 3 0 990 0 3 0 990 0 25INND 3 0 990 0 3 01210 0 3 0 990 0 3 0 990 0 3 0 990 0 26RTS 5 1 20 970 4 1 20 1190 2 0 20 970 1 0 20 970 5 1 20 959 27ONND 3 0 990 0 3 11210 0 3 0 990 0 3 1 990 0 3 1 990 0 28Detection_GMM 8 7 990 0 11 71199 0 2 2 990 0 1 1 990 0 17 10 990 0 29α-Precision 9 2 8 982 9 2 6 1204 3 1 6 984 2 0 9 981 14 6 4 986 30β-Recall 9 2 5 985 8 2 4 1206 4 1 8 982 2 0 6 984 15 5 8 982 31Authenticity 8 2 7 972 10 2 10 1200 4 1 6 984 2 0 5 985 11 2 8 982 32TRTS 25 5 20 970 11 2 20 1190 9 2 20 970 4 1 20 970 26 7 20 970 33Predictive score 26 11 979 0 10 21210 0 11 4 990 0 5 3 990 0 24 6 990 0 34C2ST 22 6 979 0 23 14 1210 0 10 6 990 0 5 2 990 0 40 28 990 0 35Discriminative score 25 9 990 0 23 12 1210 0 9 5 979 0 5 2 990 0 36 27 990 0 36Detection_MLP 34 23 990 0 32 21 1199 0 12 7 990 0 6 3 990 0 43 27 979 0 37TSTR 46 9 20 970 20 4 20 1190 19 4 20 970 8 1 20 970 51 10 20 970 38WCS 430 5 880 0 9 21210 0 26 2 990 0 18 2 990 0 54 6 990 0 39MTop-Div 100 90 990 0 38 71210 0 54 62 990 0 40 11 990 0 40 7 990 0 40CAS NaN NaN NaN NaN 80 27 20 1300 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN 32 Table 10: Running time statistics of the
measure executions, extending Table 8, for PTB diagnostic ECG, Sine, StarLightCurves, UniMiB SHAR, and Wikipedia web traffic. In addition to the average running time (Mean), we provide for each dataset and measure the standard deviation (StD) of the measurements, the number of completed, un-aided executions (Valid), and the number of cache-aided executions (Cached). Aided means that the execution was accelerated by the use of previously computed and cached artifacts. Running times for DOMIAS are not available.

Measure | PTB diagnostic ECG | Sine | StarLightCurves | UniMiB SHAR | Wikipedia web traffic (four columns per dataset: Mean StD Valid Cached)
1 Spatial correlation: 0 0 19 1191 | 0 0 20 1300 | 0 0 20 1190 | 0 0 20 1300 | 0 0 19 751
2 Temporal correlation: 0 0 35 1175 | 0 0 20 1300 | 0 0 20 1190 | 0 0 20 1300 | 0 0 16 754
3 Context-FID: 0 0 1210 0 | 0 0 1320 0 | 0 0 1199 0 | 0 0 1320 0 | 0 0 770 0
4 Improved recall: 0 0 1210 0 | 0 0 1320 0 | 0 0 1210 0 | 0 0 1320 0 | 1 0 759 0
5 FBCA: 0 0 25 1185 | 0 0 20 1300 | 0 0 20 1190 | 0 0 20 1300 | 0 0 16 754
6 Improved precision: 0 0 20 1190 | 0 0 20 1300 | 0 0 20 1179 | 0 0 20 1300 | 1 0 20 750
7 Detection_linear: 0 0 1188 0 | 0 0 1320 0 | 0 0 1210 0 | 0 0 1320 0 | 1 1 770 0
8 Distr. metric: 3 1 17 1182 | 0 0 20 1300 | 0 0 20 1190 | 0 0 20 1300 | 0 0 32 738
9 JSD: 1 1 19 1191 | 0 0 20 1300 | 0 0 20 1190 | 0 0 20 1300 | 6 2 18 752
10 WD: 1 0 23 1187 | 0 0 20 1300 | 0 0 20 1190 | 0 0 20 1300 | 7 6 17 742
11 KLD: 2 1 23 1176 | 0 0 20 1300 | 0 0 20 1179 | 0 0 20 1300 | 6 3 17 731
12 NDB-over/under: 3 1 20 750 | 0 0 20 1300 | 0 0 20 1179 | 1 0 20 1300 | 4 1 18 730
13 ApEn: 16 7 20 1190 | 0 0 20 1300 | 1 0 20 1190 | 0 0 20 1300 | 0 0 20 750
14 Sig-MMD: N/A N/A N/A N/A | 0 0 1320 0 | 1 0 330 0 | 0 0 1320 0 | 2 0 231 0
15 Density: 3 0 9 1201 | 1 0 13 1307 | 0 0 10 1189 | 1 0 13 1307 | 3 0 14 756
16 NDB: 2 1 17 1171 | 0 0 20 1300 | 0 0 20 1168 | 0 0 20 1300 | 9 4 18 697
17 Coverage: 3 0 11 1199 | 1 0 7 1313 | 1 0 10 1200 | 1 0 7 1313 | 3 0 20 750
18 ACS: 28 26 1210 0 | 0 0 1320 0 | 0 0 1210 0 | 0 0 1320 0 | 7 30 770 0
19 CT: 3 1 18 1181 | 0 0 20 1300 | 0 0 20 1179 | 1 0 20 1300 | 6 2 19 740
20 Autocorrelation: 660 165 37 1173 | 0 0 20 1300 | 1 1 20 1190 | 0 0 20 1300 | 12 6 19 751
21 Detection_XGB: 4 1 1210 0 | 1 0 1320 0 | 1 0 1210 0 | 2 0 1320 0 | 6 2 770 0
22 Max-RTS: 12 11 1210 0 | 1 1 1320 0 | 1 1 1210 0 | 2 1 1320 0 | 30 20 759 0
23 STS: 11 2 1210 0 | 2 0 1320 0 | 2 0 1210 0 | 2 0 1320 0 | 21 4 770 0
24 ICD: 11 7 1210 0 | 3 0 1320 0 | 12 10 1210 0 | 3 1 1320 0 | 4 0 770 0
25 ONND: 11 8 1210 0 | 3 0 1320 0 | 11 10 1210 0 | 3 1 1320 0 | 4 0 770 0
26 INND: 12 9 1210 0 | 3 0 1320 0 | 11 10 1210 0 | 3 0 1320 0 | 4 0 770 0
27 RTS: 18 5 20 1190 | 2 0 20 1300 | 2 0 20 1190 | 3 1 20 1300 | 49 19 18 752
28 Detection_GMM: 49 25 1210 0 | 4 3 1320 0 | 3 1 1199 0 | 3 1 1320 0 | 71 36 770 0
29 β-Recall: 33 9 8 1202 | 6 1 7 1313 | 5 1 7 1203 | 6 1 9 1311 | 67 19 7 763
30 Authenticity: 42 16 6 1204 | 5 1 5 1315 | 4 1 4 1195 | 6 2 5 1315 | 77 26 7 763
31 α-Precision: 40 7 6 1193 | 5 1 8 1312 | 5 1 9 1190 | 7 1 6 1314 | 70 16 6 764
32 Predictive score: 63 18 1210 0 | 11 4 1320 0 | 13 5 1210 0 | 16 6 1320 0 | 87 19 770 0
33 TRTS: 59 14 14 1119 | 12 3 20 1300 | 14 4 20 1179 | 15 4 20 1300 | 88 19 17 753
34 MTop-Div: 42 6 1210 0 | 38 14 1320 0 | 42 7 1210 0 | 40 6 1320 0 | 40 5 770 0
35 Detection_MLP: 85 45 1210 0 | 18 11 1320 0 | 17 12 1144 66 | 20 14 1320 0 | 157 107 748 0
36 Discriminative score: 124 71 1199 0 | 16 9 1320 0 | 22 11 1210 0 | 19 11 1320 0 | 216 129 748 0
37 C2ST: 125 72 1199 0 | 16 9 1320 0 | 22 11 1199 0 | 20 13 1320 0 | 219 126 759 0
38 TSTR: 122 27 17 1193 | 23 6 20 1289 | 25 8 20 1190 | 31 9 20 1300 | 168 38 19 740
39 WCS: N/A N/A N/A N/A | 19 3 1320 0 | 130 7 1210 0 | 48 3 1320 0 | 77 3 770 0
40 CAS: 466 163 17 1292 | 30 5 20 1410 | 41 16 20 1300 | 90 22 20 1410 | N/A N/A N/A N/A

Table 11: Running time of the embedder usage recorded in the Main and Embedders experiments, sorted by average rank. The listed values are the average times required to employ the embedder across multiple tests, combining inference and training. All values are rounded to seconds.
Embedding | Appliances energy | ElectricDevices | Exchange rate | Google stock | PPG and respiration | PTB diagnostic ECG | Sine | StarLightCurves | UniMiB SHAR | Wikipedia web traffic
1 Concat: 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
2 Catch22: 99 | 2 | 3 | 1 | 14 | 889 | 3 | 12 | 7 | 86
3 TS2Vec: 327 | 391 | 41 | 38 | 515 | 3697 | 252 | 640 | 256 | 4812

G Measure Selection Guide

Ultimately, STEB analyzes and compares synthetic TS evaluation measures to help users make a better-informed selection of measures.
Below, we provide step-by-step instructions for how this selection process can proceed using STEB output.

1. Determine which categories are relevant for the given use case. Usually, one measure is needed per category. For each category, repeat all of the following steps.
2. Start with the measure ranked highest in reliability (see Table 6).
3. If the measure lacks consistency in this category (see Table 7), move on to the next-best measure. We suggest a minimum of r_con = 0.5.
4. If the use case dictates running time constraints, for instance because many parametrizations of a generative model must be evaluated during optimization, measures with excessive running times should be skipped (see Table 8 in combination with Table 11). For this purpose, consider the running time on the dataset most similar in size, TS length, and number of feature channels to the use case's target dataset.
5. Further, if ease of use is an important selection criterion, it makes sense to skip embedder-dependent measures, or to use them only if embeddings can be reused in other categories. Similarly, some measures, such as DOMIAS and CT, are prone to errors, which require additional effort and knowledge to fix (see Table 4 and Table 5).

Based on the experimental results in this paper, and assuming running time is not a limiting factor, the best measure suite for synthetic TS evaluation currently comprises
• α-Precision for fidelity,
• ACS for generalization,
• Autocorrelation for privacy, and
• Context-FID for representativeness.

Table 12: Running time statistics of the embedder usage extending Table 11. In addition to the average running time (Mean), we provide for each dataset and embedder the standard deviation (StD) of the measurements, the number of completed, un-aided executions (Valid), and the number of cache-aided executions (Cached). Aided means that the model training was skipped due to loading a cached pre-trained model.
Dataset (per embedder: Mean StD Valid Cached)
Appliances energy: Concat 0 0 4543 0 | Catch22 99 26 7442 0 | TS2Vec 327 125 40 22110
ElectricDevices: Concat 0 0 9099 0 | Catch22 2 2 9855 0 | TS2Vec 391 107 40 27368
Exchange rate: Concat 0 0 4574 0 | Catch22 3 2 7859 0 | TS2Vec 41 16 41 22220
Google stock: Concat 0 0 4193 0 | Catch22 1 2 7221 0 | TS2Vec 38 20 40 22154
PPG and respiration: Concat 0 0 5537 0 | Catch22 14 4 7817 0 | TS2Vec 515 259 40 22286
PTB diagnostic ECG: Concat 0 0 11700 0 | Catch22 889 162 10394 0 | TS2Vec 3697 1035 39 26884
Sine: Concat 0 0 10071 0 | Catch22 3 2 10682 0 | TS2Vec 252 94 40 29920
StarLightCurves: Concat 0 0 9727 0 | Catch22 12 6 10622 0 | TS2Vec 640 364 40 27269
UniMiB SHAR: Concat 0 0 5717 0 | Catch22 7 3 10633 0 | TS2Vec 256 113 41 29909
Wikipedia web traffic: Concat 0 0 7509 0 | Catch22 86 23 5182 0 | TS2Vec 4812 1475 36 17149
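The step-by-step guide in Appendix G can be sketched as a small filtering routine. The function and data below are a minimal illustration of the procedure, not part of STEB's actual API; the measure names, scores, and runtimes are invented, and only the r_con >= 0.5 threshold is taken from the text.

```python
# Hypothetical sketch of the Appendix G selection procedure. Measure names,
# consistency scores, and runtimes are invented for illustration.

R_CON_MIN = 0.5  # suggested minimum consistency (step 3)

def select_measure(ranked_measures, consistency, runtime_s, max_runtime_s=None):
    """Walk the reliability ranking (step 2) and return the first measure
    that is consistent enough (step 3) and fast enough (step 4)."""
    for m in ranked_measures:                          # best reliability first
        if consistency.get(m, 0.0) < R_CON_MIN:        # step 3: skip inconsistent
            continue
        if max_runtime_s is not None and runtime_s.get(m, float("inf")) > max_runtime_s:
            continue                                   # step 4: skip slow measures
        return m
    return None                                        # nothing satisfies all criteria

# Toy example: "A" fails the consistency check, "B" is too slow, so "C" wins.
choice = select_measure(
    ranked_measures=["A", "B", "C"],
    consistency={"A": 0.4, "B": 0.9, "C": 0.8},
    runtime_s={"A": 1, "B": 500, "C": 10},
    max_runtime_s=60,
)
```

In practice, steps 1 and 5 (category relevance and ease of use) remain manual judgment calls; only the consistency and runtime filters lend themselves to this kind of mechanical pre-selection.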
arXiv:2505.21170v1 [quant-ph] 27 May 2025

Quantum AIXI: Universal Intelligence via Quantum Information

Elija Perrier [0000-0002-6052-6798]
Centre for Quantum Software & Information, UTS and Gradient Institute, Sydney
elija.perrier@gmail.com

Abstract. AIXI is a widely studied model of artificial general intelligence (AGI) based upon principles of induction and reinforcement learning. However, AIXI is fundamentally classical in nature, as are the environments in which it is modelled. Given that the universe is quantum mechanical in nature, and given the exponential overhead required to simulate quantum mechanical systems classically, the question arises as to whether there are quantum mechanical analogues of AIXI which are theoretically consistent or practically feasible as models of universal intelligence. To address this question, we extend the framework to quantum information and present Quantum AIXI (QAIXI). We introduce a model of quantum agent/environment interaction based upon quantum and classical registers and channels, showing how quantum AIXI agents may take both classical and quantum actions. We formulate the key components of AIXI in quantum information terms, extending previous research on quantum Kolmogorov complexity and a QAIXI value function. We discuss conditions and limitations upon quantum Solomonoff induction and show how contextuality fundamentally affects QAIXI models.

Keywords: Quantum · AIXI · Universal Intelligence · Complexity.

1 Introduction

A cornerstone in the theoretical landscape of AGI is the AIXI model [33,34], which provides a mathematically rigorous and complete framework for an optimal Bayesian reinforcement learning agent. AIXI's optimality is rooted in Solomonoff's theory of universal induction [50] and classical computability theory. However, AIXI and its underlying assumptions are fundamentally classical.
Given that our universe is governed by quantum mechanics, a fundamental question arises: what constitutes universal intelligence in a quantum mechanical world? This paper addresses this question by developing the mathematical foundations for Quantum AIXI (QAIXI), an AGI model operating within the principles of quantum mechanics. Using the formal theory of quantum information processing, we extend prior work on quantum intelligence [15,44] and situate QAIXI as an agent anchored in both quantum and classical registers, interacting with the environment via quantum and classical channels. Using this formulation, we contribute the following:
(a) Describing the QAIXI agent's interaction loop using density operators and quantum measurement theory, along with specifying quantum Kolmogorov complexity ($K_Q$) [10,53] in terms of quantum information channels and its role in a universal quantum prior.
(b) Utilising the theory of quantum-to-quantum (QTQ) and quantum-to-classical channels [56] to describe the space of possible quantum environments.
(c) Formulating a quantum equivalent of the universal Bayesian mixture over quantum environments while adapting a quantum Bellman equation and the QAIXI policy based on quantum formalism.
(d) Describing circumstances in which the Kochen-Specker theorem [38] renders any QAIXI history fundamentally contextual.
As we discuss below, QAIXI is impractical to implement. Our aim is therefore to contribute to the debate over theoretical models of optimal intelligence that account for the fundamentally quantum nature of the universe, particularly limitations stemming from the inherent properties of quantum state spaces, quantum computation, and quantum measurement.

1.1 Related Work

AIXI [33–35] is among the leading AGI proposals at the centre of debates over AGI, both as a model of universal intelligence and as a proposal against which other proposals are assessed. It takes Solomonoff's
universal induction, folds it into sequential decision theory, and produces what is arguably the cleanest statement of an unbounded optimal agent. Subsequent symbolic, connectionist, and hybrid AGI proposals typically position themselves by (i) trying to approximate AIXI in practice (e.g. AIXItl, MC-AIXI, information-geometric variants) or (ii) critiquing AIXI's reliance on classical assumptions and incomputable quantities. Logicist programmes [9], emergentist lines [51], distributed cognitive architectures [26,27], and self-organising approaches [41] all share AIXI's basic agent–environment model, albeit while extending, varying or removing its component Bayesian machinery. AIXI has been subject to critical review over the last two decades on a number of grounds: (1) physical unrealisability or super-Turing requirements [39,55]; (2) the Cartesian boundary between software and hardware [7,8]; (3) inadequate treatment of resource constraints, counterfactual reasoning or multi-agent reflection [23,49]; and (4) challenges to the Kolmogorov-based prior itself [52]. Even so, AIXI remains a cornerstone of classical AGI (CAGI) theories. The advent of quantum information technologies (QIT) [2,43] and interest in their computational capabilities and quantum forms of AGI (QAGI) poses a natural question for AIXI models: if AIXI purports to be a universal agent in a classical world, what replaces it in a quantum universe? Most quantum-AI research to date is narrowly cast, focusing on specific technical features such as quantum decision theory, quantum machine learning [11,16,21,45,48], quantum reinforcement learning [20,36,42], and hybrid variational schemes [46,47]. These propose access to quantum algorithms providing prospective quantum advantage (e.g. amplitude-estimation-based policy evaluation [40]), albeit with ongoing debate about their practical reach [3].
Attempts to synthesise AIXI with quantum mechanics (and quantum information processing) have been relatively limited. Work on Solomonoff induction [50] and quantum computation (e.g. Deutsch's quantum Turing machines [18]) has considered component features of the AIXI program, including substituting specific classical components for quantum analogues (e.g. replacing classical search with quantum Grover search [15]). Yet quantum mechanics, operationally and ontologically, carries profound differences from classical ontology (and computation). Quantum systems may subsist in superposition states, be entangled, and lack definitive identity until measurement interaction. Moreover, any integration of quantum mechanics and AIXI must also reckon with seminal results in quantum foundations: contextuality, non-locality, and quantum measurement, all absent from classical AIXI. Work on quantum causality [14,57], algorithmic thermodynamics [22], and many-worlds decision theory [54] offers contributions on foundational quantum mechanical issues facing quantum AIXI-style agents whose quantum operations disturb the very environment they probe. At present, however, there is neither a consensus quantum prior (the role played by $2^{-K(\nu)}$ in AIXI) nor a tractable complexity class in which QAIXI could be approximated.

2 Classical AIXI

The AIXI agent [33] is a theoretical model for a universally optimal reinforcement learning agent. It interacts with an unknown environment $\mu$ in cycles. In each cycle $t$, the agent chooses an action $a_t \in \mathcal{A}$ and receives a percept $e_t = (o_t, r_t) \in \mathcal{O} \times [0,1]$, consisting of an observation $o_t$ and a reward $r_t$. The history is $h_{<t} = a_1 e_1 \ldots a_{t-1} e_{t-1}$. AIXI's policy $\pi^{\mathrm{AIXI}}$ aims to maximise future expected rewards:
$$a_t := \pi^{\mathrm{AIXI}}(h_{<t}) = \arg\max_{a_t} \sum_{e_t} \cdots \sum_{e_m} \Big( \sum_{k=t}^{m} \gamma^{k-t} r_k \Big) \sum_{\nu \in \mathcal{M}_U} w_\nu\, \nu(e_t, \ldots, e_m \mid h_{<t} a_t \ldots a_m), \quad (1)$$
where $m$ is the lifetime of the agent, $\gamma \in [0,1)$ is a discount factor, and the inner sum is over all environments $\nu$ in a universal class $\mathcal{M}_U$ (e.g., all chronological semi-computable environments $\mathcal{M}_{\mathrm{sol}}$), weighted by $w_\nu = 2^{-K(\nu)}$. $K(\nu)$ is the Kolmogorov complexity of the classical Turing machine (CTM) describing environment $\nu$. The sum over environments constitutes Solomonoff's universal prior $\xi_U$:
$$\xi_U(e_{1:m} \| a_{1:m}) := \sum_{\nu \in \mathcal{M}_{\mathrm{sol}}} 2^{-K(\nu)} \nu(e_{1:m} \| a_{1:m}). \quad (2)$$
AIXI is Pareto-optimal and self-optimising in the sense of [33]. However, it is incomputable due to the use of $K(\cdot)$ and the sum over all CTMs. Its ontological assumptions are classical: deterministic or classically stochastic environments, objective histories, and classical information theory.

3 Quantum Computational Foundations

3.1 Quantum information

To construct QAIXI, we must replace classical computational notions with their quantum counterparts. However, this is not a straightforward isomorphic mapping. The unique properties of quantum mechanics problematise conventional assumptions underlying classical AIXI, such as agent identity, definitiveness of state, and the separability and distinguishability of an agent from its environment. To formulate QAIXI in a way that accommodates and caters for these ontological differences, we turn to the language and formalism of quantum information theory [56]. In QIP, systems (agents) and environments are described in terms of registers $X$ (e.g. bits) comprising information drawn from a classical alphabet $\Sigma$. Registers may be in either classical or quantum states. We define a Hilbert space $\mathcal{X} = \mathbb{C}^{|\Sigma|}$ with computational basis $\{|s\rangle\}_{s \in \Sigma}$. A quantum state of a register $X$ (associated with space $\mathcal{X}$) is a density operator $\rho \in \mathcal{D}(\mathcal{X})$, i.e., a positive semi-definite operator with $\mathrm{Tr}(\rho) = 1$. Classical states are then those $\rho$ which are diagonal in the basis of $\mathcal{X}$. Interactions with (and changes to) states occur via channels, which are superoperators. They define how quantum and classical states interact.
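These definitions are easy to make concrete numerically. The following is a minimal numpy sketch of our own, not from the paper: a qubit density operator, its defining properties, and a dephasing channel (given by Kraus operators) that turns a quantum state into a classical, diagonal one.

```python
import numpy as np

# A qubit density operator, a classical (diagonal) state, and a dephasing
# channel given by Kraus operators. Purely pedagogical illustration of the
# register formalism; not part of the QAIXI construction itself.

plus = np.array([1.0, 1.0]) / np.sqrt(2)      # |+> = (|0> + |1>)/sqrt(2)
rho = np.outer(plus, plus)                    # pure quantum state |+><+|

# A density operator is positive semi-definite with unit trace.
assert np.isclose(np.trace(rho), 1.0)
assert np.all(np.linalg.eigvalsh(rho) >= -1e-12)

# Full dephasing in the computational basis: Kraus operators {P0, P1}.
P0 = np.diag([1.0, 0.0])
P1 = np.diag([0.0, 1.0])
dephased = P0 @ rho @ P0 + P1 @ rho @ P1

# The output is diagonal, i.e. a classical state: the off-diagonal
# coherences of |+><+| have been destroyed.
assert np.allclose(dephased, np.diag([0.5, 0.5]))
```

The dephasing map is the simplest example of a channel that extracts no information yet still renders a quantum state classical, foreshadowing the channel taxonomy below.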
Classical-to-classical (CTC) channels preserve classical states; classical-to-quantum (CTQ) channels encode classical information in quantum states; quantum-to-classical (QTC) channels extract classical information from quantum states, decohering them in the process; and quantum-to-quantum (QTQ) channels form coherent (e.g. unitary) transformations between quantum registers. In this framing, both agents and environments are registers which may be CAGI (classical state sets) or QAGI (quantum state sets). They may interact in ways that are coherent (quantum-preserving) or classical (via CTC or QTC maps).

3.2 Quantum AIXI

The components of QAIXI can then be understood as follows. Let the QAIXI agent be associated with a quantum register $A$ representing its internal degrees of freedom, while its environment is represented by a quantum register $E$. Their corresponding (finite-dimensional) complex Hilbert spaces are denoted by $\mathcal{H}_A$ and $\mathcal{H}_E$. At interaction step $t$ the agent's private state is a density operator $\rho_A^{(t)} \in \mathcal{D}(\mathcal{H}_A)$ that may encode its complete history, its current belief state, or, in the ideal case, an explicit representation of the universal quantum mixture $\Xi_Q$. The environment is simultaneously described by $\rho_E^{(t)} \in \mathcal{D}(\mathcal{H}_E)$. Taken together, the composite system occupies the joint state $\rho_{AE}^{(t)} \in \mathcal{D}(\mathcal{H}_A \otimes \mathcal{H}_E)$. Note that $\rho_{AE}^{(t)}$ can be entangled, so that $\rho_{AE}^{(t)} \neq \rho_A^{(t)} \otimes \rho_E^{(t)}$. A history is therefore an operator-valued stochastic process rather than a sequence of point events.

Actions. At each cycle the agent chooses an action $a_t$, formally a completely positive, trace-preserving (CPTP) map acting on the environment register (or on a designated subsystem of the joint register). The two canonical cases are as follows. 1. First, when the agent performs a coherent control operation, $a_t$ is realised by a unitary channel $\Phi_{U_{a_t}}: X \mapsto U_{a_t} X U_{a_t}^{\dagger}$, applied
either on $\mathcal{H}_E$ alone or on $\mathcal{H}_A \otimes \mathcal{H}_E$. Such a map preserves superposition and entanglement and therefore belongs to the quantum-to-quantum (QTQ) class of channels. 2. Second, the agent may decide to interrogate the environment by means of a quantum instrument $\mathcal{I}_{a_t} = \{\mathcal{E}^{a_t}_k\}_{k \in \Gamma_{\mathrm{obs}}}$, where the outcome alphabet $\Gamma_{\mathrm{obs}}$ is classical. Each branch map $\mathcal{E}^{a_t}_k: \mathcal{L}(\mathcal{H}_{E'}) \to \mathcal{L}(\mathcal{H}_{E'})$ is completely positive and trace non-increasing, satisfies $\sum_k \mathcal{E}^{a_t}_k = \mathcal{E}^{a_t}$ (their sum is a CPTP map) and $\mathcal{E}^{a_t\dagger}_k(I) = I$, and typically takes the form $\mathcal{E}^{a_t}_k(X) = M^{a_t}_k X M^{a_t\dagger}_k$ for a POVM $\{M^{a_t}_k\}$. Because every branch deposits a classical record $k$ while decohering the measured subsystem, $\mathcal{I}_{a_t}$ is a paradigmatic quantum-to-classical (QTC) channel. Formally the action $a_t$ can be represented via $\Phi_{a_t}: \mathcal{L}(\mathcal{H}_{AE}) \to \mathcal{L}(\mathcal{H}_{AE})$, which may be indexed by the form of $\mathcal{L}$.

Percepts and rewards. What the QAIXI agent perceives is determined entirely by the instrument it applies. A measurement outcome $k \in \Gamma_{\mathrm{obs}}$ becomes the observation component $o_t$ of the percept $e_t = (o_t, r_t)$. The outcome is drawn with probability
$$\Pr\big(o_t = k \mid a_t, \rho_{AE}^{(t-1)}\big) = \mathrm{Tr}\Big[\mathcal{E}^{a_t}_k\big(\mathrm{Tr}_A\, \rho_{AE}^{(t-1)}\big)\Big].$$
The reward $r_t \in \Gamma_{\mathrm{rew}}$ (with $\Gamma_{\mathrm{rew}} \subseteq \mathbb{R}$ in the standard reinforcement-learning setting) is computed by a classical post-processing function that may depend on both $o_t$ and the agent's prior internal state $\rho_A^{(t-1)}$. The resulting data $o_t$ and $r_t$ are stored in designated registers, but at this stage they are classical data. So they can be stored in a classical register via a CTC map, or encoded in a quantum register via a CTQ map.

Interaction loop. The QAIXI cycle of interaction with the environment is as follows. Given the pre-interaction state $\rho_{AE}^{(t-1)}$, the agent selects $a_t$. If $a_t$ is a unitary, the composite state updates coherently to $\rho_{AE}^{(t)} = \Phi_{U_{a_t}}\big(\rho_{AE}^{(t-1)}\big)$. If instead $a_t$ is an instrument, an outcome $k$ is observed with
the above probability, and the environment collapses to
$$\rho_{E'}^{(t)} = \mathrm{Tr}_A\, \rho_{AE}^{(t)} = \frac{\mathcal{E}^{a_t}_k\big(\mathrm{Tr}_A\, \rho_{AE}^{(t-1)}\big)}{\Pr(k)}. \quad (3)$$
The global post-measurement state is $\rho_{AE}^{(t)} = \rho_A^{(t-1)} \otimes \rho_{E'}^{(t)}$ (assuming no entanglement; see Appendix A for the general case). Finally, the agent applies an internal CPTP map, its update rule, to obtain $\rho_A^{(t)}$ from $\rho_A^{(t-1)}$ in the light of $(a_t, o_t, r_t)$. This update may be trivial (the identity) if $\rho_A^{(t)}$ is purely classical, or it may itself be a non-trivial quantum channel when the agent maintains coherent beliefs. The agent's internal memory is refreshed by a chosen CPTP rule $\mathcal{U}_{\mathrm{int}}: \rho_A^{(t-1)} \mapsto \rho_A^{(t)}$ that may itself depend on $(a_t, k, r_t)$. This operational definition fixes the semantics of actions, percepts, and rewards in the quantum setting, providing the concrete substrate on which the universal mixture $\xi_Q$ and the QAIXI value functional are built.

3.3 Quantum Kolmogorov complexity

QAIXI relies upon quantum (rather than classical) Kolmogorov complexity. Let $\mathcal{Q}_{\mathrm{sol}}$ be the set of all chronological, semi-computable quantum environments, each such environment $Q \in \mathcal{Q}_{\mathrm{sol}}$ being represented by a quantum Turing machine that outputs an instrument sequence. An environment $Q$ is a CPTP map acting on a register $\mathcal{H}_E$. Throughout we use the Choi-Jamiolkowski isomorphism [31] to identify $Q$ with the purified vector $|Q\rangle := (\mathbb{1} \otimes Q)\,|\Phi^+\rangle \in \mathcal{H}_E^{\otimes 2}$, where $|\Phi^+\rangle = \frac{1}{\sqrt{d}} \sum_{i=1}^{d} |ii\rangle$ for $d = \dim \mathcal{H}_E$. All trace-distance bounds on $|Q\rangle$ translate to diamond-norm bounds on $Q$. Its quantum Kolmogorov complexity is:
$$K_Q(Q) := \min_{p \in \{0,1\}^{\star}} \Big\{\, |p| \;:\; \big\| U_{\mathrm{univ}}|p\rangle|0\rangle - |Q\rangle \otimes |\mathrm{aux}\rangle \big\| \le \varepsilon \Big\}, \quad (4)$$
for a universal QTM $U_{\mathrm{univ}}$ and fixed $\varepsilon < 1$, with Hilbert-Schmidt norm $\|\cdot\|$. Equation (4) therefore measures the program length needed to approximate the channel $Q$. The $|\mathrm{aux}\rangle$ term is an ancilla whose dimension is at most polynomial
in $|p|$. Here we have assumed that for any two universal QTMs $U_1, U_2$ there exists a constant $c_{U_1,U_2}$ such that for every environment $Q$:
$$\big| K_Q^{U_1}(Q) - K_Q^{U_2}(Q) \big| \le c_{U_1,U_2}. \quad (5)$$
The quantum Solomonoff mixture is the semi-density operator:
$$\Xi_Q\big(a_{1:m}\big) := \sum_{Q \in \mathcal{Q}_{\mathrm{sol}}} 2^{-K_Q(Q)} \rho_E^{Q}(a_{1:m}), \quad (6)$$
where $\rho_E^{Q}(a_{1:m})$ is the environment state generated by $Q$ under the action sequence $a_{1:m}$. Projecting $\Xi_Q$ onto any classical POVM recovers the probability mixture $\xi_Q(e_{1:m} \| a_{1:m})$ that generalises (2). Whenever every admissible environment $Q$ outputs commuting observables and the QAIXI agent restricts itself to instruments diagonal in that basis, each $\rho_E^{Q}(a_{1:m})$ becomes a diagonal density operator encoding an ordinary probability measure. In that limit $K_Q(Q)$ coincides (up to an additive constant) with the classical Kolmogorov complexity $K(\nu)$ of the induced CTM $\nu$, and equations (6)–(9) reduce exactly to the classical Solomonoff prior $\xi_U$ and the AIXI policy (1). Hence the quantum formulation strictly generalises the classical one.

3.4 QAIXI value functional

We also define the quantum analogue of the Bellman equation. For a policy $\pi$ and environment $Q$, define the discounted return
$$V^{\pi}_{Q}\big(\rho_{AE}^{(t-1)}\big) = \mathbb{E}_{\pi,Q}\Big[ \sum_{k=t}^{m} \gamma^{k-t} r_k \,\Big|\, \rho_{AE}^{(t-1)} \Big]. \quad (7)$$
Because each branch map $\mathcal{E}^{a_t}_k$ both yields $o_t$ and updates $\rho_E^{(t)}(k)$, the expectation $\mathbb{E}_{\pi,Q}\big[\sum_{k=t}^{m} \gamma^{k-t} r_k\big]$ is taken over the (non-Markovian) instrument-conditioned trajectory measure, not over an i.i.d. sequence. It terminates at the finite horizon $m$. Averaging over the universal prior we obtain:
$$V^{\pi}_{\Xi_Q}\big(\rho_{AE}^{(t-1)}\big) = \sum_{Q} 2^{-K_Q(Q)}\, V^{\pi}_{Q}\big(\rho_{AE}^{(t-1)}\big). \quad (8)$$
The QAIXI policy maximises this functional at every step:
$$a_t := \pi^{\mathrm{QAIXI}}\big(\rho_{AE}^{(t-1)}\big) = \arg\max_{a_t \in \mathrm{Act}} \mathbb{E}_{k \sim p_t(\cdot \mid a_t)}\Big[ r_t(k) + \gamma\, V^{\mathrm{QAIXI}}_{\Xi_Q}\big(\rho_{AE}^{(t)}(k)\big) \Big], \quad (9)$$
where $p_t(k \mid a_t) = \mathrm{Tr}\big[ \mathcal{E}^{a_t}_k\big( \Xi^{(t)}_Q(a_{1:t-1}) \big) \big]$. Here $\rho_{AE}^{(t)}(k)$ is the post-measurement state, with probabilities given by $p_t(k \mid a_t)$. The fix-point equation implicit in (8)–(9) is the quantum Bellman equation. The act of observation fundamentally alters $\rho_E$ due to quantum back-action.
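As an illustration of the policy (9) in its degenerate one-step case, the following toy sketch of our own (an invented two-environment class standing in for $\mathcal{Q}_{\mathrm{sol}}$, projective instruments as actions, and $\gamma = 0$) computes $p_t(k \mid a_t) = \mathrm{Tr}[\mathcal{E}^{a_t}_k(\Xi)]$ and picks the action with the highest expected immediate reward:

```python
import numpy as np

# One-step, two-environment toy of the QAIXI policy: the mixture state Xi
# stands in for Xi_Q, actions are projective measurements, and we pick the
# action maximising expected immediate reward (gamma = 0). Environments,
# prior weights, and rewards are invented for illustration.

rho_a = np.diag([1.0, 0.0])                     # environment 1 prepares |0><0|
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_b = np.outer(plus, plus)                    # environment 2 prepares |+><+|

w = {"a": 2.0 ** -1, "b": 2.0 ** -2}            # weights 2^-K(Q), toy values
Z = sum(w.values())                             # renormalise over the toy class
Xi = (w["a"] * rho_a + w["b"] * rho_b) / Z      # toy universal mixture

Pplus = np.outer(plus, plus)
actions = {
    "measure_Z": [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])],  # Z-basis projectors
    "measure_X": [Pplus, np.eye(2) - Pplus],                  # X-basis projectors
}
reward = [1.0, 0.0]                             # reward r(k) for outcome k

def expected_reward(povm, state):
    # p(k|a) = Tr[E_k(state)]; expected reward is sum_k r(k) p(k|a)
    return sum(r * np.trace(P @ state).real for r, P in zip(reward, povm))

best = max(actions, key=lambda a: expected_reward(actions[a], Xi))
```

Here the Z measurement wins because both toy environments assign high weight to outcome 0 in that basis; the full QAIXI policy additionally recurses through the discounted value of the post-measurement state, which this sketch omits.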
Note that as we are dealing with quantum trajectories, the classical notion of a history $h_{<t}$ must now account for the sequence of quantum operations and classical outcomes. Moreover, a universal QTM may not halt on superposed inputs and is itself not computable. The substitution $K \rightsquigarrow K_Q$ therefore leaves the mixture (6) just as incomputable as its classical counterpart and introduces an additional obstacle: evaluating $\rho_E^{Q}(a_{1:m})$ is in general computationally hard even for $m = 1$. Moreover, preparing an arbitrary $n$-qubit quantum state $|\psi\rangle$ can require a quantum circuit of size exponential in $n$ [37]. This implies that $K_Q(|\psi\rangle)$ can be exponentially larger (in qubit count) than $n$ for complex states, unlike the classical $K(x)$, which is at most $\ell(x) + c$ (only if $x$ is the program itself, not an arbitrary binary string). The upper bound is still $O(2^n)$ bits. Practical approximations therefore hinge on identifying structured subclasses of $\mathcal{Q}_{\mathrm{sol}}$ (e.g. stabiliser processes or tensor-network environments) for which both $K_Q$ and the Born probabilities are efficiently approximable.

4 Quantum Solomonoff Induction (QSI)

Classical Solomonoff induction assigns to every finite data string a universal a-priori probability obtained by summing, with complexity-based weights, over all computable environments that could have produced that string. In the quantum setting, the data arriving at an agent are the classical outcomes of instrument branches executed on a quantum environment. The Quantum Solomonoff Induction formalism therefore replaces probability measures by semi-density operators and Kolmogorov codes by quantum program states.

Environment class and universal mixture.
Let $\mathcal{Q}_{\mathrm{sol}}$ denote the set of all chronological, semi-computable quantum environments. Each $Q \in \mathcal{Q}_{\mathrm{sol}}$ is specified by a program $p(Q) \in \{0,1\}^{*}$ for a fixed universal QTM $U_{\mathrm{univ}}$. Running $p(Q)$ produces, step by step, a sequence of CPTP maps that act on the environment register and a measurement specification for the classical transcript. Here $Q$ plays the role of $\nu$ in the classical case. Its quantum Kolmogorov complexity is $K_Q(Q) := \min \{ |p(Q)| : U_{\mathrm{univ}}(p(Q)) \simeq Q \}$. For an action sequence $a_{1:m}$ we write $\rho_E^{Q}(a_{1:m}) \in \mathcal{D}(\mathcal{H}_E)$ for the (generally mixed) state that $Q$ prepares on the environment register immediately before the observation at cycle $m$. The universal semi-density operator conditional on the agent's actions is then:
$$\Xi_Q(a_{1:m}) := \sum_{Q \in \mathcal{Q}_{\mathrm{sol}}} 2^{-K_Q(Q)} \rho_E^{Q}(a_{1:m}), \qquad 0 < \mathrm{Tr}\big[\Xi_Q(a_{1:m})\big] \le 1. \quad (10)$$
Projecting $\Xi_Q$ onto the POVM that implements the agent's instrument at cycle $m$ yields the scalar $\xi_Q(e_{1:m} \| a_{1:m}) := \mathrm{Tr}\big[ M_{e_{1:m}} \Xi_Q(a_{1:m}) \big]$, which reduces to the classical Solomonoff prior $\xi_U$ when all $M_{e_{1:m}}$ commute.

Bayesian updates. Given a history $h_{<t} = (a_{1:t-1}, e_{1:t-1})$, the posterior semi-density operator is obtained by the update followed by a renormalisation:
$$\Xi^{(t)}_Q(a_{1:t-1}) := \frac{\mathcal{M}_{e_{t-1}}\big( \Xi^{(t-1)}_Q(a_{1:t-2}) \big)}{\mathrm{Tr}\big[ \mathcal{M}_{e_{t-1}}\big( \Xi^{(t-1)}_Q(a_{1:t-2}) \big) \big]}, \qquad \Xi^{(0)}_Q := \Xi_Q. \quad (11)$$
Here $\mathcal{M}_{e_{t-1}}$ is the CP map of the realised branch of the agent's instrument at step $t-1$. The distribution for the next observation is:
$$\xi_Q(\cdot \,\|\, h_{<t}, a_t) = k \mapsto \mathrm{Tr}\big[ \mathcal{E}^{a_t}_k\big( \Xi^{(t)}_Q(a_{1:t-1}) \big) \big]. \quad (12)$$

4.1 Convergence theorem

Analogous to classical Solomonoff induction, under certain conditions we might expect QSI to exhibit convergence properties. The convergence properties of QSI are complex and remain an open question. To sketch the issues, we consider the following model (see Appendix D for more detail). Write $D(\rho \| \sigma) = \mathrm{Tr}[\rho(\ln\rho - \ln\sigma)]$ for the Umegaki relative entropy. Fix a true quantum environment $Q^{\star} \in \mathcal{Q}_{\mathrm{sol}}$ and let $\rho^{\star}_E(a_{1:m})$ be its state sequence. We assume: (C1) Ergodicity.
There is a $\delta > 0$ such that for every admissible action policy the time-averaged state satisfies
$$\liminf_{m \to \infty} \frac{1}{m} \sum_{k=1}^{m} \big\| \rho^{\star}_E(a_{1:k}) - \rho^{\star}_E(a_{1:k-1}) \big\|_1 \le \delta.$$
(C2) Informational completeness. Each cycle's instrument has a POVM refinement whose classical Fisher information matrix is full-rank up to error $\epsilon > 0$. (C3) Finite complexity gap. $K_Q(Q^{\star}) < \infty$ and $g := \sum_{Q \neq Q^{\star}} 2^{-\left(K_Q(Q) - K_Q(Q^{\star})\right)} < \infty$.

Theorem 1 (QSI convergence). Under (C1)–(C3) the posterior density operator satisfies
$$\mathbb{E}_{Q^{\star}}\Big[ D\big( \rho^{\star}_E(a_{1:t}) \,\big\|\, \Xi^{(t)}_Q(a_{1:t}) \big) \Big] \le \frac{K_Q(Q^{\star}) \ln 2 + \ln(1+g)}{t}. \quad (13)$$
Consequently, by the quantum Pinsker inequality [17,32] (which bounds the total variation distance by the relative entropy),
$$\mathbb{E}_{Q^{\star}}\Big[ \tfrac{1}{2} \big\| \rho^{\star}_E(a_{1:t}) - \Xi^{(t)}_Q(a_{1:t}) \big\|_1 \Big] = O\big(t^{-1/2}\big). \quad (14)$$

Sketch of proof. Proving QSI convergence is an open question. One potential avenue is as follows. Define likelihood operators $\Lambda^{Q}_t := \mathcal{M}_{e_t} \circ \cdots \circ \mathcal{M}_{e_1}\big( \rho_E^{Q}(a_{1:t}) \big)$. Monotonicity of quantum relative entropy (the data-processing inequality; see [56]) implies that applying any branch map $\mathcal{M}_{e_k}$ can only decrease divergence:
$$D\big( \rho^{\star}_E \,\|\, \Xi_Q \big) - D\big( \Lambda^{\star}_k \,\|\, \Lambda^{\Xi}_k \big) \ \text{is an}\ \mathbb{E}_{Q^{\star}}\text{-martingale}. \quad (15)$$
Condition (C2) guarantees that each step's expected drop is non-negative. Summing these expected drops from $k = 1$ up to $k = t$ therefore shows that the average divergence after $t$ cycles is bounded by the initial one,
$$\mathbb{E}_{Q^{\star}}\Big[ D\big( \rho^{\star}_E \,\|\, \Xi_Q \big) \Big] \le \frac{K_Q(Q^{\star}) \ln 2 + \ln(1+g)}{t}.$$
Finally, the quantum Pinsker inequality [17] turns this $1/t$ bound on relative entropy into an $O(t^{-1/2})$ bound on trace distance, completing the convergence claim. The replacement of
full informational completeness by $\epsilon$-completeness introduces an extra $O(\epsilon)$ term in the Pinsker bound, which vanishes as $\epsilon \to 0$. Equation (11) is executed by the internal belief-revision QTQ channel $\mathcal{U}_{\mathrm{int}}$. The posterior $\Xi^{(t)}_Q$ supplies the conditional expectation required for the quantum Bellman equation (9). QSI is the inductive basis of QAIXI, concentrating weight on environments that remain compatible with classical measurements. In the commuting limit the theorem reduces to the classical Solomonoff convergence bound [33]. A formal proof would need to justify martingale convergence for operator-valued universal mixtures under quantum measurement dynamics (App. D).

4.2 QSI Limitations

Aside from the computational complexity considerations of QSI (which we postpone for future work), several other significant challenges arise with QSI:

1. Specifying $\mathcal{Q}_{\mathrm{sol}}$ and $K_Q(\cdot)$. A universal class of computable quantum environments must be fixed, e.g. the set of all chronological, semi-computable QTMs that output instrument sequences. The quantum description-length $K_Q(Q)$ is then the length of the shortest qubit-string that drives a fixed universal QTM to approximate the environment $Q$ within trace-distance $\varepsilon$. Proving universality and invariance up to an additive constant, standard for classical Kolmogorov complexity, is technically subtler in the quantum setting because programs may themselves be in superposition. Moreover, if an environment is only weakly entangled, this may ultimately mean a classical AIXI can effectively approximate QAIXI's value function.

2. Measurement back-action. Born probabilities $\Pr_Q(e_{1:m} \| a_{1:m})$ are computed from the iterated CPTP maps that model the interaction loop (Sec. 3.2). Each measurement branch both yields the classical outcome $o_t$ and updates the post-measurement state $\rho_E^{(t)} \mapsto \rho_E^{(t+1)}(o_t)$.
Because measurement alters $\rho_E$, the quantum back-action makes the expectation in the Bellman equation dependent on the sequence of quantum operations and classical outcomes in a complex, non-Markovian way, so the likelihood of future data depends on the entire history $h_{<t} a_t$.

3. Non-locality and contextuality. When $Q$ prepares entangled states, the joint distribution $\Pr_Q(e_{1:m} \| a_{1:m})$ can violate Bell inequalities [6] and exhibit contextual dependence on the full instrument sequence. QSI must therefore aggregate over models that are not representable by any classical hidden-variable process, which complicates identifiability and convergence analyses.

4. Incomputability and resource sensitivity of the prior. Like its classical counterpart, $K_Q$ is uncomputable. Moreover, the weight $2^{-K_Q(Q)}$ does not penalise the physical resources needed either to prepare the initial state of $Q$ or to simulate its dynamics, tasks that can be exponential-time or generally infeasible. For physically realistic prediction one may replace $K_Q$ by a resource-aware complexity measure that incorporates, for example, circuit depth or Trotter step count.

5. Quantum error correction (QEC). Implementing QAIXI on real, noisy hardware requires quantum error correction. Encoding each logical qubit in hundreds to thousands of physical qubits both (i) inflates the effective description-length, shifting the universal prior toward far simpler environments, and (ii) introduces continual syndrome-measurement back-action whose residual logical noise is only bounded below a code-specific threshold. Together, these effects raise the resource bar and weaken the assumptions (ergodicity and informational completeness) behind potential convergence guarantees.

Moreover, there are specific challenges to QAIXI and QSI raised by quantum foundational issues.

Bell non-locality.
In any QAIXI environment that distributes entangled subsystems to space-like separated agent components, the joint percept distribution can violate the CHSH inequality. No classical hidden-variable environment $\nu$ can reproduce those statistics. A classical $\xi_U$ assigns zero mass to each Bell-violating local hypothesis, so only the quantum mixture $\xi_Q$ has non-trivial mass to contribute. When the agent exploits Bell-type correlations for decision-making, its percepts are no longer conditionally independent given the entire history, breaking the martingale structure assumed in Theorem 1. See Appendix F.

https://arxiv.org/abs/2505.21170v1

Kochen–Specker contextuality. If the true environment prepares a KS set of projectors, then there exists no history-independent map $v : \mathcal{P}(\mathcal{H}_E) \to \{0,1\}$ assigning pre-existing outcomes to every measurement the agent may perform. QSI must therefore sum over hypotheses whose outcome statistics depend on the entire future instrument sequence up to horizon $m$. Bayesian updates must track non-commuting observables (see Appendix E.1).

Theorem 2. Let $\{P_1, \ldots, P_n\} \subset \mathcal{P}(\mathcal{H})$ be a Kochen–Specker uncolourable set whose projectors sum to the identity operator on $\mathcal{H}$, i.e. there exists no map $v : \mathcal{P}(\mathcal{H}) \to \{0,1\}$ satisfying $\sum_{i=1}^{k} v(\Pi_i) = 1$ for every projector decomposition $\sum_{i=1}^{k} \Pi_i = \mathbb{1}$ that contains only elements of the set. Let $\mathcal{I}$ be any instrument whose Kraus operators are polynomials in the $P_j$ and whose action is confined to the support $\mathcal{H}$. Then no quantum Turing machine $Q$ can output, for every action sequence $a_{1:m}$ an agent might perform, a commuting family of projectors $\{Q_{a_{1:m}}(e_{1:m})\}$ such that
$$\forall a_{1:m},\ \forall e_{1:m} \in \Gamma^m : \quad \Pr_{\mathrm{env}}(e_{1:m} \| a_{1:m}) = \mathrm{Tr}\!\left[ Q_{a_{1:m}}(e_{1:m})\, \rho^\star_E \right], \qquad (16)$$
with $\rho^\star_E$ the true environment state prepared before cycle 1.

Proof. Fix one action string $a^\dagger_{1:m}$ that instructs the agent to measure, at the final cycle, every projector in the KS set. Because the QTM outputs commuting projectors $Q_{a^\dagger_{1:m}}(e_{1:m})$, for each $j$ there is a classical random variable $X_j := v_Q(P_j) := e_{1:m}(P_j) \in \{0,1\}$, namely the indicator that the outcome sequence has eigenvalue 1 for $P_j$. Equation (16) asserts that these random variables reproduce the Born probabilities of the true state, hence the map $v_Q : P_j \mapsto X_j(\omega)$ is non-contextual: it assigns 0 or 1 to $P_j$ without reference to the context in which $P_j$ is measured. By construction $\sum_{j \in C} P_j = I$ for every context $C \subset \{1, \ldots, n\}$, whence $\sum_{j \in C} v_Q(P_j) = 1$ holds almost surely.
Thus $v_Q$ is a global transformation of the projectors into $\{0,1\}$, contradicting KS uncolourability. Therefore a perfect, non-disturbing predictor $Q$ cannot exist.

Contextuality problematises the straightforward analogy with classical history in AIXI, albeit subject to the extent to which environments actually manifest uncolourable equivalents.

E. Perrier

Because a universal classical history $h_{<t}$ cannot determine simultaneously the outcomes of all future instruments, any Bayesian update rule that conditions the QSI posterior on $h_{<t}$ alone is necessarily incomplete; the posterior must be refined by the entire future instrument schedule. Consequently the martingale proof of Theorem 1 requires an adapted filtration that records measurement contexts.

No-cloning. Evaluating the quantum likelihood $\Pr_Q(e_{1:m} \| a_{1:m})$ requires fresh copies of the pre-measurement state $\rho^Q_E(a_{1:m})$, but the no-cloning theorem forbids their duplication from a single run. Hence each Bayesian update consumes the very evidence it needs for validation; the sample complexity of learning scales with the number of distinct instruments explored, whereas in the classical case the same trajectory can be replayed arbitrarily often. Two consequences of the above are that: (a) convergence proofs must be context-sensitive, since a single universal mixture cannot assign fixed probabilities to all quantum experiments simultaneously; and (b) even under ideal identifiability the learning rate is limited by state-preparation resources, giving rise to sample-complexity consequences for any QAIXI agent. See Appendix F.2.

5 Conclusion and Open Questions

We have laid out an analysis of, and a quantum information processing-based mathematical framework for, Quantum AIXI, a universal intelligent agent designed to
operate within a quantum mechanical universe. We have shown, using our channel- and register-based model of agent/environment interaction in the quantum setting, how quantum analogues of universal intelligence components can be constructed. However, there are significant limitations to QAIXI. Firstly, it is nowhere near being a practical proposal: QEC requires millions of physical qubits for even modest logical qubit counts, and maintaining extended coherence over QAIXI decision cycles remains far beyond existing technology. Theoretically, QAIXI relies on quantum Kolmogorov complexity for its universal prior and on quantum Turing machines to model environments. Key challenges include the incomputability of quantum Kolmogorov complexity, the exponential resources often required for quantum state preparation and for (classical) simulation of quantum dynamics, and the complexity of calculating probabilities. The quantum Bellman equation involves expectations over quantum measurements and evolutions that are contextual. Moreover, exactly when QAIXI would provide a quantum advantage is an open question (see App. B). As interest in both computational agency and quantum information processing grows, understanding the capabilities and constraints of integrating quantum technologies into AGI systems will become an increasing focus. Further research directions include:

1. Computable Approximations: Can meaningful, computable (or efficiently quantum-computable) approximations to QAIXI be developed? What restrictions on the class of quantum environments $\mathcal{M}_{Q,\mathrm{sol}}$ or on the definition of $K_Q$ would make this possible?

2. Convergence Rates and Bounds: Rigorously establishing convergence theorems for QSI and deriving tight bounds on convergence rates, taking into account quantum information-theoretic limits (e.g., Holevo information, quantum data-processing inequalities), in light of learnability and tomography [1]. For example: how does quantum measurement back-action affect learning speed?
3. Quantum Resources: How do quantum resources like entanglement, if available to the agent for its internal processing or for interacting with the environment, affect the performance or complexity of QAIXI?

4. Impact of Quantum Interpretations: While this paper focused on the standard (Copenhagen-like) measurement formalism, how would adopting alternative interpretations (e.g., Everettian Many-Worlds, Bohmian mechanics [12], QBism [25]) alter the definition of QAIXI's optimality, its policy, or its perceived complexity? For example, in an Everettian setting, does QAIXI optimise expected reward across trajectories or branches [19,54]?

References

1. Aaronson, S.: The learnability of quantum states. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 463(2088), 3089–3114 (2007)
2. Aaronson, S.: Quantum computing since Democritus. Cambridge University Press (2013)
3. Aaronson, S.: Quantum machine learning algorithms: Read the fine print. Nature Physics p. 5 (2014)
4. Aaronson, S.: Shadow tomography of quantum states. In: Proceedings of the 50th annual ACM SIGACT symposium on theory of computing. pp. 325–338 (2018)
5. Aaronson, S., Arkhipov, A.: The computational complexity of linear optics. In: Proceedings of the forty-third annual ACM symposium on Theory of computing. pp. 333–342 (2011)
6. Bell, J.: Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press, Cambridge, 2nd edn. (2004)
7. Bennett, M.T.: The optimal choice of hypothesis is the weakest, not the shortest. In: Artificial General Intelligence. Springer Nature (2023)
8. Bennett, M.T.: Technical appendices (2024). https://doi.org/10.5281/zenodo.7641741, https://github.com/ViscousLemming/Technical-Appendices
9. Bennett, M.T., Maruyama, Y.: The artificial scientist: Logicist, emergentist, and universalist approaches to artificial general intelligence. In: Artificial General Intelligence. Springer (2022)
10.
Berthiaume, A., Van Dam, W., Laplante, S.: Quantum Kolmogorov complexity.
Journal of Computer and System Sciences 63(2), 201–221 (2001)
11. Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., Lloyd, S.: Quantum machine learning. Nature 549(7671), 195–202 (2017)
12. Bohm, D.: A suggested interpretation of the quantum theory in terms of "hidden" variables. I and II. Physical Review 85(2), 166–193 (1952)
13. Bostanci, J., Watrous, J.: Quantum game theory and the complexity of approximating quantum Nash equilibria. Quantum 6, 882 (2022)
14. Brukner, Č.: Quantum causality. Nature Physics 10(4), 259–263 (2014)
15. Catt, E., Hutter, M.: A gentle introduction to quantum computing algorithms with applications to universal prediction (2020), https://arxiv.org/abs/2005.03137
16. Cerezo, M., Verdon, G., Huang, H.Y., Cincio, L., Coles, P.J.: Challenges and opportunities in quantum machine learning. Nature Computational Science (2022). https://doi.org/10.1038/s43588-022-00311-3
17. Csiszár, I., Körner, J.: Information theory: coding theorems for discrete memoryless systems. Cambridge University Press (2011)
18. Deutsch, D.: Quantum theory, the Church–Turing principle and the universal quantum computer. Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 400(1818), 97–117 (1985)
19. Deutsch, D.: Quantum theory of probability and decisions. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 455(1988), 3129–3137 (1999)
20. Dong, D., Chen, C., Chen, H., Tarn, T.J.: Quantum reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 38(5), 1207–1220 (2008)
21. Dunjko, V., Briegel, H.J.: Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Reports on Progress in Physics 81(7), 074001 (2018)
22. Ebtekar, A., Hutter, M.: Foundations of algorithmic thermodynamics. Physical Review E 111(1), 014118 (2025)
23. Fallenstein, B., Soares, N., Taylor, J.: Reflective variants of Solomonoff induction and AIXI.
In: International Conference on Artificial General Intelligence. pp. 60–69. Springer (2015)
24. Fang, K., Fawzi, O., Renner, R., Sutter, D.: Chain rule for the quantum relative entropy. Physical Review Letters 124(10), 100501 (2020)
25. Fuchs, C.A., Mermin, N.D., Schack, R.: An introduction to QBism with an application to the locality of quantum mechanics. American Journal of Physics 82(8), 749–754 (2014)
26. Goertzel, B.: The general theory of general intelligence: A pragmatic patternist perspective. Tech. rep., SingularityNET (2021)
27. Goertzel, B., et al.: OpenCog Hyperon: A framework for AGI at the human level and beyond. Tech. rep., OpenCog (2023)
28. Grünwald, P.D., Vitányi, P., et al.: Algorithmic information theory. Handbook of the Philosophy of Information pp. 281–320 (2008)
29. Guo, H., Zhang, J., Koehler, G.J.: A survey of quantum games. Decision Support Systems 46(1), 318–332 (2008)
30. Gutoski, G., Watrous, J.: Toward a general theory of quantum games. In: Proceedings of the thirty-ninth annual ACM symposium on Theory of computing. pp. 565–574 (2007)
31. Haapasalo, E.: The Choi–Jamiołkowski isomorphism and covariant quantum channels. Quantum Studies: Mathematics and Foundations 8(3), 351–373 (2021)
32. Hirota, O.: Application of quantum Pinsker inequality to quantum communications. arXiv preprint arXiv:2005.04553 (2020)
33. Hutter, M.: Universal artificial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media (2004)
34. Hutter, M.: Universal Algorithmic Intelligence: A Mathematical Top→Down Approach, pp. 227–290. Springer Berlin Heidelberg, Berlin, Heidelberg (2007)
35. Hutter, M.: Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Springer, Heidelberg (2010)
36. Jerbi, S., Gyurik, C., Marshall, S., Briegel, H., Dunjko, V.: Parametrized quantum policies for reinforcement learning. Advances in Neural Information Processing
Systems 34, 28362–28375 (2021). https://doi.org/10.5281/zenodo.5833370
37. Knill, E.: Approximation by quantum circuits. arXiv preprint quant-ph/9508006 (1995)
38. Kochen, S., Specker, E.P.: The problem of hidden variables in quantum mechanics. Journal of Mathematics and Mechanics 17(1), 59–87 (1967)
39. Leike, J., Hutter, M.: Bad universal priors and notions of optimality. COLT (2015)
40. Liu, Y., Arunachalam, S., Temme, K.: A rigorous and robust quantum speed-up in supervised machine learning. Nature Physics pp. 1–5 (2021). https://doi.org/10.1038/s41567-021-01287-z
41. McMillen, P., Levin, M.: Collective intelligence: A unifying concept for integrating biology across scales and substrates. Communications Biology 7(1), 378 (Mar 2024)
42. Meyer, N., Ufrecht, C., Periyasamy, M., Scherer, D.D., Plinge, A., Mutschler, C.: A survey on quantum reinforcement learning. arXiv preprint arXiv:2211.03464 (2022)
43. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge University Press, 10th anniversary edn. (2010)
44. Özkural, E.: Ultimate intelligence part II: Physical measure and complexity of intelligence. arXiv preprint arXiv:1504.03303 (2015)
45. Perrier, E.: Quantum Geometric Machine Learning. arXiv preprint arXiv:2409.04955 (2024)
46. Schuld, M., Bergholm, V., Gogolin, C., Izaac, J., Killoran, N.: Evaluating analytic gradients on quantum hardware. Physical Review A 99(3) (Mar 2019). https://doi.org/10.1103/physreva.99.032331
47. Schuld, M., Petruccione, F.: Introduction, pp. 1–19. Quantum Science and Technology, Springer International Publishing (2018). https://doi.org/10.1007/978-3-319-96424-9_1
48. Schuld, M., Petruccione, F.: Machine Learning with Quantum Computers. Springer (2021)
49. Soares, N., Fallenstein, B.: Two attempts to formalize counterpossible reasoning in deterministic settings.
In: Artificial General Intelligence: 8th International Conference, AGI 2015, Berlin, Germany, July 22–25, 2015, Proceedings 8. pp. 156–165. Springer (2015)
50. Solomonoff, R.J.: A Formal Theory of Inductive Inference. Part I & II. Information and Control 7(1–2), 1–22, 224–254 (1964). https://doi.org/10.1016/S0019-9958(64)90223-2
51. Solé, R., Moses, M., Forrest, S.: Liquid brains, solid brains. Philosophical Transactions of the Royal Society B: Biological Sciences 374(1774), 20190040 (2019)
52. Thórisson, K.R., Bieger, J., Thorarensen, T., Sigurðardóttir, J.S., Steunebrink, B.R.: Why artificial intelligence needs a task theory: and what it might look like. In: Artificial General Intelligence: 9th International Conference, AGI 2016, New York, NY, USA, July 16–19, 2016, Proceedings 9. pp. 118–128. Springer (2016)
53. Vitányi, P.M.: Quantum Kolmogorov complexity based on classical descriptions. IEEE Transactions on Information Theory 47(6), 2464–2479 (2002)
54. Wallace, D.: The Emergent Multiverse: Quantum Theory according to the Everett Interpretation. Oxford University Press (2012)
55. Wang, P., Hammer, P.: Assumptions of decision-making models in AGI. In: Artificial General Intelligence: 8th International Conference, AGI 2015, Berlin, Germany, July 22–25, 2015, Proceedings 8. pp. 197–207. Springer (2015)
56. Watrous, J.: The Theory of Quantum Information. Cambridge University Press (2018)
57. Wiseman, H., Cavalcanti, E.: Causarum investigatio and the two Bell's theorems. Half a Century of Bell's Theorem, Quantum [Un]Speakables II (2017)

A Entanglement in the Interaction Loop

In Section 3.2, the post-measurement joint state was written under the separability assumption $\rho^{(t-1)}_{AE} = \rho^{(t-1)}_A \otimes \rho^{(t-1)}_E$, leading to $\rho^{(t)}_{AE} = \rho^{(t-1)}_A \otimes \rho^{(t)}_{E'}$. When the agent and environment registers are initially entangled, this factorisation no longer holds and the general update is as follows.
The outcome probability is given by
$$\Pr\!\left(o_t = k \mid a_t, \rho^{(t-1)}_{AE}\right) = \mathrm{Tr}\!\left[ \left(\mathrm{id}_A \otimes \mathcal{E}^{a_t}_k\right) \rho^{(t-1)}_{AE} \right]$$
with conditional post-measurement state
$$\rho^{(t)}_{AE} = \frac{\left(\mathrm{id}_A \otimes \mathcal{E}^{a_t}_k\right) \rho^{(t-1)}_{AE}}{\Pr(k)}$$
and reduced environmental state
$$\rho^{(t)}_{E'} = \mathrm{Tr}_A\, \rho^{(t)}_{AE} = \frac{\mathcal{E}^{a_t}_k\!\left( \mathrm{Tr}_A\, \rho^{(t-1)}_{AE} \right)}{\Pr(k)}.$$
Only when $\rho^{(t-1)}_{AE}$ is separable (or when the instrument completely decoheres the measured subsystem) does the joint state factorise into $\rho^{(t-1)}_A \otimes \rho^{(t)}_{E'}$. The subsequent internal QAIXI update $U_{\mathrm{int}} : \rho^{(t-1)}_A \mapsto \rho^{(t)}_A$ now acts on $\rho^{(t)}_A = \mathrm{Tr}_E\, \rho^{(t)}_{AE}$, which already incorporates any entanglement-induced disturbance. This general form of the interaction loop is valid for all pre-measurement states, entangled or not, and serves as the reference implementation for any rigorous analysis of QAIXI in the fully quantum regime.

B Quantum Advantage for QAIXI

A fundamental question for any quantum extension of classical AIXI is: under what circumstances does quantum processing provide genuine computational advantages? We examine this question below for specific environment classes in which classical simulation becomes intractable.

B.1 Boson Sampling Environments

A common reference point for assessing quantum computational advantage over classical protocols is boson sampling [5], a non-universal photonic quantum protocol that injects indistinguishable single photons into a fixed linear interferometer and samples output patterns whose probabilities are governed by matrix permanents, which are considered exponentially hard for classical computers. Let $\mathcal{B}_n$ denote the class of quantum environments that embed boson sampling processes as follows. Each environment $Q \in \mathcal{B}_n$ maintains an $n$-mode optical interferometer described by a random unitary matrix $U \in U(n)$ drawn from the Haar measure. On each cycle, $Q$ accepts a classical action $a_t$ specifying an input photon configuration. It then evolves the state via $U$ and outputs measurement outcomes from a fixed detection scheme.
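The classical hardness invoked here comes from the permanent: the probability of a given photon-detection pattern is proportional to $|\mathrm{Perm}(A)|^2$ for a submatrix $A$ of $U$. As an illustrative sketch only (the function below is our own construction, not part of the formal development), even exact evaluation via Ryser's formula costs $O(2^n n^2)$ time, in contrast to the polynomial-time determinant:

```python
import itertools
import numpy as np

def permanent(a: np.ndarray) -> complex:
    """Permanent via Ryser's formula: O(2^n * n^2), exponential in n.

    perm(A) = (-1)^n * sum over non-empty column subsets S of
              (-1)^{|S|} * prod_i sum_{j in S} a[i, j].
    """
    n = a.shape[0]
    total = 0j
    for r in range(1, n + 1):
        for cols in itertools.combinations(range(n), r):
            row_sums = a[:, list(cols)].sum(axis=1)
            total += (-1) ** r * np.prod(row_sums)
    return (-1) ** n * total

# Sanity checks: Perm(I_3) = 1; Perm(all-ones 3x3) = 3! = 6.
assert abs(permanent(np.eye(3)) - 1) < 1e-9
assert abs(permanent(np.ones((3, 3))) - 6) < 1e-9
```

Boson sampling probabilities involve permanents of $n \times n$ submatrices of the Haar-random $U$, so a classical AIXI simulating $Q \in \mathcal{B}_n$ exactly inherits this exponential cost.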
Suppose there exists a family $\{Q_n\} \subset \mathcal{B}_n$ such that (a) generating one percept from $Q_n$ requires $\mathrm{poly}(n)$ quantum operations; and (b) computing the output distribution to total variation distance $< 2^{-n}$ is #P-hard. In such a case, any classical AIXI with polynomially bounded total running time fails to approximate the optimal value unless the polynomial hierarchy collapses.

This can be understood by considering how AIXI's value function requires accurate estimation of environment transition probabilities. For $Q_n \in \mathcal{B}_n$, this reduces to sampling from the boson sampling distribution. The original boson sampling conjecture holds that classical simulation to within the required accuracy would yield an algorithm for computing permanents of Gaussian random matrices, a problem conjectured to be #P-complete, which would collapse the polynomial hierarchy, contradicting standard complexity-theoretic assumptions. In terms of value functions, let $\hat{V}_{\mathrm{classical}}$ denote the value function computed by any polynomial-time classical agent. If $|\hat{V}_{\mathrm{classical}} - V^\star_{Q_n}| < 1/\mathrm{poly}(n)$, then the agent must distinguish between boson sampling distributions and uniform distributions over the same support, a task equivalent to permanent estimation within exponential precision. In this way, boson sampling provides an in-principle litmus test for quantum advantage, albeit with limited applicability to the specific tasks AIXI or QAIXI would undertake. The boson sampling example illustrates how QAIXI's computational advantage over classical AIXI is contingent on environments involving problems that are classically hard.

B.2 Structured Environment Classes

The example above raises the obvious question: when do quantum environments admit efficient classical approximation? We draw upon results in quantum information to illustrate when this may be true in the case of weakly entangled systems. In the quantum formalism, an environment $Q$ is called a Matrix Product Environment (MPE) with
bond dimension $\chi$ if its state evolution can be represented as
$$\rho^{(t)}_E = \mathrm{Tr}_{\mathrm{aux}}\!\left[ \bigotimes_{i=1}^{n} A^{[t]}_i(\chi) \right] \qquad (17)$$
where each $A^{[t]}_i(\chi)$ is a $\chi \times \chi$ matrix depending on the history up to time $t$. The question then becomes: how classically learnable is an MPE? Let $\mathcal{E}_\chi$ denote the class of all MPEs with bond dimension $\chi = O(1)$. Then there exists a classical polynomial-time agent that $\varepsilon$-approximates QAIXI's value function on any $Q \in \mathcal{E}_\chi$ with sample complexity $O(\mathrm{poly}(n, \varepsilon^{-1}))$. The bond dimension of an MPE is the size of the tensors used to represent it, a figure that relates to the complexity of the representation and the extent of its entanglement. The key observation is that MPEs with constant bond dimension admit efficient tomography using classical shadow tomography [4], which can, in certain cases, reduce the estimation of expectation values of exponentially many observables of an unknown quantum state to only a logarithmic number of randomised measurements of that state.

C Information-Theoretic Limits on QAIXI

Information-theoretic limits on quantum information processing and measurement [56] that are well studied in QIP also have consequences for any QAIXI proposal. For example, the no-cloning theorem discussed above imposes fundamental limits on QAIXI's learning efficiency that have no classical analog. Since QAIXI observes each environment state only once per trajectory, convergence rates for learning the current environment state precisely are limited. For quantum environments with parameter vector $\theta \in \Theta \subset \mathbb{R}^d$, the quantum Fisher information matrix provides fundamental bounds on parameter estimation. For a parametric quantum state $\rho(\theta)$, it has elements
$$[I_Q(\theta)]_{jk} = \tfrac{1}{2}\, \mathrm{Tr}\!\left[ \rho(\theta) \left\{ L_j(\theta), L_k(\theta) \right\} \right] \qquad (18)$$
where $L_j(\theta)$ is the symmetric logarithmic derivative for parameter $\theta_j$.
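For intuition, in the single-parameter pure-state case $\rho(\theta) = |\psi_\theta\rangle\langle\psi_\theta|$ the quantum Fisher information reduces to $F(\theta) = 8 \lim_{\delta \to 0} (1 - |\langle \psi_\theta | \psi_{\theta+\delta} \rangle|)/\delta^2$. The following numerical sketch is our own illustration (the phase family $|\psi_\theta\rangle = e^{-i\theta Z/2}|+\rangle$, for which $F = 1$, is a standard toy model and not an environment class from the text):

```python
import numpy as np

plus = np.array([1.0, 1.0]) / np.sqrt(2)  # |+> state of a single qubit

def psi(theta: float) -> np.ndarray:
    # |psi_theta> = exp(-i * theta * Z / 2) |+>, with Z = diag(1, -1)
    return np.diag(np.exp(-0.5j * theta * np.array([1.0, -1.0]))) @ plus

def qfi_pure(theta: float, d: float = 1e-3) -> float:
    # Finite-difference estimate of the pure-state QFI via the fidelity:
    # F ~ 8 * (1 - |<psi_theta | psi_{theta + d}>|) / d^2
    overlap = abs(np.vdot(psi(theta), psi(theta + d)))
    return 8.0 * (1.0 - overlap) / d**2

F = qfi_pure(0.3)
assert abs(F - 1.0) < 1e-3  # analytic value for this family is F = 1
```

With $F = 1$, the single-parameter form of the bound below gives $\mathrm{Var}(\hat\theta) \geq 1/T$ for $T$ measurement outcomes, i.e. the standard quantum limit for phase estimation with unentangled probes.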
Moreover, the quantum Cramér–Rao bound [56] (a bound on the optimal precision for estimating parameters from quantum measurements) means that any QAIXI agent estimating environment parameters $\theta$ from $T$ measurement outcomes satisfies
$$\mathbb{E}\!\left[ (\hat{\theta} - \theta)(\hat{\theta} - \theta)^T \right] \geq \frac{1}{T}\, I_Q(\theta)^{-1}. \qquad (19)$$
This bound is generally tighter than the classical Fisher information bound, reflecting the additional constraints imposed by quantum measurement. QAIXI's potential advantage would likely emerge in environments where this structure aligns with quantum computational advantages.

D QSI Detail and Limitations

Here we discuss conditions on QSI (Theorem 1), in particular some of the challenges facing a proof of convergence.

D.1 Convergence Conditions

Prior to measurement, each state is given a weight proportional to its description length. Recall from Theorem 1 that under assumptions (C1)–(C3), the posterior semi-density operator $\Xi^{(t)}_Q$ (Eq. (11)) satisfies Eq. (13):
$$\mathbb{E}_{Q^\star}\, D\!\left( \rho^\star_E(a_{1:t}) \,\big\|\, \Xi^{(t)}_Q(a_{1:t}) \right) = \mathrm{Tr}\!\left[ \rho^\star_E(a_{1:t}) \left( \ln \rho^\star_E(a_{1:t}) - \ln \Xi^{(t)}_Q(a_{1:t}) \right) \right] \leq \frac{K_Q(Q^\star) \ln 2 + \ln(1+g)}{t}. \qquad (20)$$
To show this, recall that each density operator $\rho^Q_E$ is associated with a quantum environment $Q$, where $Q^\star$ denotes the true environment. $\Xi^{(t)}_Q$ describes the agent's best assessment of the true state of the environment $\rho^\star_E$ at time $t$ (we leave the time dependence of $\rho$ understood). The QAIXI belief state is a weighting, using such descriptions, over each possible state density operator:
$$\Xi_Q = \omega_{Q^\star}\, \rho^\star_E + \sum_{Q \neq Q^\star} \omega_Q\, \rho^Q_E.$$
The $\omega_Q$ terms reflect the descriptions of environment $Q$: the shortest description of $Q$ has length $K_Q(Q)$, and simpler descriptions receive more weight via $\omega_Q = 2^{-K_Q(Q)}$. In the classical minimum description length (MDL) formalism, the code length of a data sequence combines the model complexity (description length) and the likelihood of the data under the chosen model (a dilution term). The $D\!\left( \rho^\star_E(a_{1:t}) \,\big\|\, \Xi^{(t)}_Q(a_{1:t}) \right)$ term denotes the relative entropy and reflects the additional measurements required to close the gap between $\Xi$ and $\rho^\star_E$. To see this, define
$$Z = \omega_{Q^\star} + \sum_{Q \neq Q^\star} \omega_Q = \omega_{Q^\star}(1+g) \qquad (21)$$
where
$$g = \sum_{Q \neq Q^\star} \frac{\omega_Q}{\omega_{Q^\star}} = \sum_{Q \neq Q^\star} 2^{-(K_Q(Q) - K_Q(Q^\star))}. \qquad (22)$$
$\Xi_Q$ is not yet normalised and $\rho_E$ remains a semi-density operator since $\mathrm{Tr}\, \rho_E \leq 1$; i.e., as in the classical case, $Z$ is sub-normalised, $Z = \sum_Q \omega_Q \leq 1$ for prefix-free codes with binary word lengths (see [28]). To apply the martingale and chain-rule identities we require density (not semi-density) operators. To this end, note that the correct model's share of the total description mass (the effective prior mass of the true environment inside the normalised mixture) is
$$\frac{\omega_{Q^\star}}{Z} = \frac{1}{1+g} =: c_{\max}, \qquad Z = \omega_{Q^\star}(1+g).$$
The normalised prior at time $t = 0$ is then given by
$$\bar{\Xi}_Q = \frac{\Xi^{(0)}_Q}{Z} = \frac{\rho^\star_E}{1+g}. \qquad (23)$$
The form of the bound $D_0$ depends on our choice of weighting $c$ for $\rho^\star_E$, where $0 < c \leq c_{\max}$. Choosing $c = c_{\max}$ gives
$$\Xi^{(0)}_Q = c\, \rho^\star_E = \frac{1}{1+g}\, \rho^\star_E \qquad (24)$$
with relative entropy bounded by
$$D_0 = \mathrm{Tr}\!\left[ \rho_E \left( \ln \rho^\star_E - \ln \Xi_Q \right) \right] \leq \mathrm{Tr}\!\left[ \rho_E \left( \ln \rho^\star_E - \left( \ln \rho^\star_E - \ln(1+g) \right) \right) \right] = \ln(1+g),$$
assuming $\mathrm{Tr}(\rho^Q_E) = 1$ for all environments $Q$. This form includes only the dilution term. To obtain the more familiar MDL form (which includes both complexity and dilution terms) we can scale $c$ (while remaining within the bounds above), e.g. $c = \omega^2_{Q^\star}/Z = \omega_{Q^\star}/(1+g)$. Doing so gives
$$D_0 = \mathrm{Tr}\!\left[ \rho_E \left( \ln \rho^\star_E - \ln \Xi_Q \right) \right] \qquad (25)$$
$$\leq \mathrm{Tr}\!\left[ \rho_E \left( \ln \rho^\star_E - \left( \ln \omega_{Q^\star} - \ln(1+g) + \ln \rho^\star_E \right) \right) \right] \qquad (26)$$
$$= \underbrace{K_Q(Q^\star) \ln 2}_{\text{model complexity}} + \underbrace{\ln(1+g)}_{\text{dilution}} \qquad (27)$$
so that in the event of perfect information ($g = 0$) the bound reduces to the complexity term $K_Q(Q^\star) \ln 2$. At each cycle $s$ (action and observation), the agent's state update is a CPTP map $\mathcal{M}_s : \rho \mapsto \rho(a_{1:s})$ such that
$$\Xi^{(s)}_Q = \frac{\mathcal{M}_s\!\left( \Xi^{(s-1)}_Q \right)}{\mathrm{Tr}\, \mathcal{M}_s\!\left( \Xi^{(s-1)}_Q \right)} \quad \text{and} \quad \rho^\star_E(a_{1:s}) = \frac{\mathcal{M}_s\!\left( \rho^\star_E(a_{1:s-1}) \right)}{\mathrm{Tr}\, \mathcal{M}_s\!\left( \rho^\star_E(a_{1:s-1}) \right)}. \qquad (28)$$
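The bound (27) can be checked numerically: since the normalised mixture dominates $(\omega_{Q^\star}/Z)\,\rho^\star_E$ as an operator and the matrix logarithm is operator monotone, $D(\rho^\star_E \,\|\, \Xi_Q/Z) \leq \ln(1+g) \leq K_Q(Q^\star)\ln 2 + \ln(1+g)$. The toy sketch below is our own construction (random full-rank density matrices stand in for environment states, and the description lengths in `K` are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_density(d: int) -> np.ndarray:
    # Random full-rank density matrix: A A† normalised to unit trace.
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def logm_h(rho: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    # Matrix log of a Hermitian PSD matrix via eigendecomposition.
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(np.maximum(w, eps))) @ v.conj().T

def rel_entropy(rho: np.ndarray, sigma: np.ndarray) -> float:
    # Umegaki relative entropy D(rho || sigma) in nats.
    return np.trace(rho @ (logm_h(rho) - logm_h(sigma))).real

d = 4
K = [2, 3, 5, 5]                       # hypothetical description lengths; K[0] is K_Q(Q*)
w = np.array([2.0 ** -k for k in K])   # universal-prior weights 2^{-K_Q(Q)}
states = [rand_density(d) for _ in K]
rho_star = states[0]

Z = w.sum()
g = w[1:].sum() / w[0]
mix = sum(wi * s for wi, s in zip(w, states)) / Z  # normalised mixture

# D(rho* || mix) <= ln(1+g) <= K(Q*) ln 2 + ln(1+g)
assert rel_entropy(rho_star, mix) <= np.log(1 + g) + 1e-9
```

The dominance argument means the check holds for any choice of states and weights, not just this random instance.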
Using the chain rule for quantum relative entropy [24] we obtain
$$D_{s-1} = \mathbb{E}_{Q^\star}\, D\!\left( \rho^\star_E(a_{1:s-1}) \,\big\|\, \Xi^{(s-1)}_Q \right) = \mathbb{E}_{Q^\star} \sum_{k=1}^{s-1} D\!\left( \rho^\star_E(a_k \mid a_{1:k-1}) \,\big\|\, \Xi^{(k)}_Q(a_k \mid a_{1:k-1}) \right). \qquad (29)$$
Applying (29) with $s = t + 1$ and noting that each conditional divergence is non-negative gives
$$\mathbb{E}_{Q^\star}\, D\!\left( \rho^\star_E(a_{1:t}) \,\big\|\, \Xi^{(t)}_Q(a_{1:t}) \right) \leq D_0. \qquad (3)$$
Combining (27) and (3) and dividing by $t$:
$$\mathbb{E}_{Q^\star}\, D\!\left( \rho^\star_E(a_{1:t}) \,\big\|\, \Xi^{(t)}_Q(a_{1:t}) \right) \leq \frac{K_Q(Q^\star) \ln 2 + \ln(1+g)}{t} \qquad (30)$$
which is Eq. (13). In our sketch above, the transition from relative entropy convergence to a convergence rate for distinguishability is achieved using the quantum Pinsker inequality, quantifying how the QAIXI agent's belief $\Xi^{(t)}_Q(a_{1:t})$ converges to the true state $\rho^\star_E(a_{1:t})$. In an information-theoretic sense, $\tfrac{1}{2}\|\rho - \sigma\|_1$ represents the maximum probability with which one can distinguish between two quantum states $\rho$ and $\sigma$ using an optimal measurement. The quantum Pinsker inequality (also known as the Csiszár–Kullback–Pinsker inequality for quantum states) provides that for any two density operators $\rho$ and $\sigma$:
$$\tfrac{1}{2}\|\rho - \sigma\|_1 \leq \sqrt{\tfrac{1}{2} D(\rho \| \sigma)}.$$
We use the property that for non-negative random variables, if $A \leq B$ then $\mathbb{E}[A] \leq \mathbb{E}[B]$, together with Jensen's inequality $\mathbb{E}[\sqrt{X}] \leq \sqrt{\mathbb{E}[X]}$, such that
$$\mathbb{E}_{Q^\star}\!\left[ \tfrac{1}{2}\|\rho - \sigma\|_1 \right] \leq \sqrt{\tfrac{1}{2}\, \mathbb{E}_{Q^\star}\, D(\rho \| \sigma)}. \qquad (31)$$
Substituting into Equation (31) with $\rho = \rho^\star_E(a_{1:t})$ and $\sigma = \Xi^{(t)}_Q(a_{1:t})$:
$$\mathbb{E}_{Q^\star}\!\left[ \tfrac{1}{2}\left\| \rho^\star_E(a_{1:t}) - \Xi^{(t)}_Q(a_{1:t}) \right\|_1 \right] \leq \sqrt{\tfrac{1}{2}\, \mathbb{E}_{Q^\star}\, D\!\left( \rho^\star_E(a_{1:t}) \,\big\|\, \Xi^{(t)}_Q(a_{1:t}) \right)} \leq \sqrt{\frac{K_Q(Q^\star) \ln 2 + \ln(1+g)}{2t}} = \left( \sqrt{\frac{K_Q(Q^\star) \ln 2 + \ln(1+g)}{2}} \right) t^{-1/2}.$$
Hence
$$\mathbb{E}_{Q^\star}\!\left[ \tfrac{1}{2}\left\| \rho^\star_E(a_{1:t}) - \Xi^{(t)}_Q(a_{1:t}) \right\|_1 \right] = O\!\left( t^{-1/2} \right). \qquad (32)$$
The significance of Eq. (32) is that the $O(t^{-1})$ convergence rate for the expected relative entropy implies an $O(t^{-1/2})$ convergence rate for the expected trace distance. Thus any measurement strategy can separate the agent's belief from the true state only with probability vanishing like $t^{-1/2}$. This means that as the agent accumulates more data (as $t$ increases), its belief state $\Xi^{(t)}_Q(a_{1:t})$ not only becomes a better approximation of the true state $\rho^\star_E(a_{1:t})$ in terms of information-theoretic divergence, but also becomes increasingly indistinguishable from the true state via any physical measurement: the maximum probability of successfully distinguishing the agent's belief from the true state diminishes as $t^{-1/2}$. This transition from a $1/t$ rate for a quadratic-like loss (relative entropy can be seen as a generalised squared error on the space of probability distributions or density matrices) to a $t^{-1/2}$ rate for an $L_1$-like distance (trace distance is the quantum analogue of total variation distance) is a common theme in statistical inference. It mirrors, for instance, aspects of the classical Bernstein–von Mises theorem, where posterior distributions concentrate around the true parameter value. Importantly, this $O(t^{-1/2})$ rate for distinguishability is derived without needing any assumptions beyond those already made for establishing the relative entropy bound (i.e., conditions (C1)–(C3) that underpin Theorem 1).

D.2 Conditions on QSI

Establishing the convergence of QSI is challenging because of the conditions that would need to subsist:

1. (C1) Ergodicity: There is a $\delta > 0$ such that for every admissible action policy the time-averaged (with respect to each admissible instrument schedule) state of the true environment $Q^\star$ satisfies
$$\liminf_{m \to \infty} \frac{1}{m} \sum_{k=1}^{m} \left\| \rho^\star_E(a_{1:k}) - \rho^\star_E(a_{1:k-1}) \right\|_1 \leq \delta.$$
This condition implies that the environment's state does not change too erratically on average, ensuring that past observations retain some relevance for predicting future states.

2. (C2) Informational Completeness: Each cycle's instrument (measurement) has a POVM refinement whose classical Fisher information matrix is full-rank up to error $\epsilon > 0$. This means the agent's measurements are sufficiently informative to distinguish different quantum states, which is crucial for learning.

3. (C3) Complexity Gap: The true environment $Q^\star$ has finite quantum Kolmogorov complexity, $K_Q(Q^\star) < \infty$, and the sum of relative complexities of other environments
$$g := \sum_{Q \neq Q^\star} 2^{-\left( K_Q(Q) - K_Q(Q^\star) \right)}$$
is finite. This ensures that $Q^\star$ is not infinitely complex relative to the universal QTM and has a non-vanishing initial weight in the QSI mixture, and that the collective weight of alternative hypotheses is manageable.

D.3 Comparison with classical SI

In our sketch above, we have adapted the typical classical approach for the convergence of classical SI.

1. Likelihood Operators and Initial Divergence Bound: The QSI mixture $\Xi_Q(a_{1:m})$ (Eq. (10)) is a semi-density operator representing the agent's belief about the environment state before the $m$-th observation, given actions $a_{1:m}$. The posterior $\Xi^{(t)}_Q$ is obtained via updates based on observed outcomes (Eq. (11)). A first step, analogous to classical Solomonoff induction, is to bound the initial information deficit
of the QSI mixture with respect to the true environment $Q^\star$. This initial divergence, $D_0 = D(\rho^\star_E \| \Xi^{(0)}_Q)$, where $\Xi^{(0)}_Q$ is the initial QSI prior, can be bounded by the quantum Kolmogorov complexity of the true environment:
$$D_0 \leq K_Q(Q^\star) \ln 2 + \ln(1+g). \qquad (33)$$
This bound reflects that the cost of encoding the true environment within the universal mixture is related to its complexity $K_Q(Q^\star)$, while the term $\ln(1+g)$ accounts for the mass of alternative hypotheses. Establishing this bound rigorously for semi-density operators and the Umegaki relative entropy is central to a full proof.

2. Monotonicity of Quantum Relative Entropy and Martingale Argument: The core of the convergence argument hinges on the data-processing inequality for quantum relative entropy. This principle states that for any quantum operation (a completely positive trace-preserving map, or CPTP map), such as a measurement branch $\mathcal{M}_{e_k}$ occurring in the Bayesian update, the relative entropy between any two states cannot increase: $D(\mathcal{M}(\rho) \| \mathcal{M}(\sigma)) \leq D(\rho \| \sigma)$. Recall the likelihood operators $\Lambda^Q_t := \mathcal{M}_{e_t} \circ \cdots \circ \mathcal{M}_{e_1}(\rho^Q_E(a_{1:t}))$, where $\rho^Q_E(a_{1:t})$ is interpreted as the initial state on which the sequence of CPTP maps $\mathcal{M}_{e_1}, \ldots, \mathcal{M}_{e_t}$ acts. Eq. (15) states that $D(\rho^\star_E \| \Xi_Q) - D(\Lambda^\star_k \| \Lambda^\Xi_k)$ is an $\mathbb{E}_{Q^\star}$-martingale. Specifically, if $D_k = D(\rho^{\star(k)}_E \| \Xi^{(k)}_Q)$ is the divergence between the true state and the posterior after $k$ observation-update cycles, then $\mathbb{E}_{Q^\star}[D_k \mid \text{history } h_{<k}] \leq D_{k-1}$: the divergence, on average, tends to decrease or stay the same. The drop in divergence at step $k$, $\Delta_k = D_{k-1} - \mathbb{E}_{e_k \sim Q^\star}[D_k \mid h_{<k}]$, is therefore non-negative. Assumption (C2) is needed to ensure that if the posterior $\Xi^{(k-1)}_Q$ is not perfectly aligned with the true state $\rho^{\star(k-1)}_E$, measurement provides additional information that leads to a decrease in uncertainty (and greater weighting of the QAIXI belief $\Xi$ towards the true state: $g$ declines while $\omega_{Q^\star}$ increases).

3.
Average Divergence: By summing these non-negative expected drops over t interaction cycles, we get:

Σ_{k=1}^{t} E_{Q⋆,hist}[∆_k] = E_{Q⋆,hist}[D_0 − D_t] ≤ D_0,

where D_0 is the initial divergence bounded as in Eq. (33), and D_t = D(ρ⋆_E(a_{1:t}) ∥ Ξ^(t)_Q(a_{1:t})) is the divergence at step t. This implies that the sum of positive one-step learning gains (in terms of divergence reduction) is bounded by the initial total ignorance D_0. The result that

E_{Q⋆}[D_t] ≤ D_0 / t = (K_Q(Q⋆) ln 2 + ln(1 + g)) / t

shows that the expected divergence at time t itself must decrease on average. This establishes that the expected relative entropy between the true environment state and the QSI posterior converges to zero at a rate of 1/t.

4. Quantum Pinsker Inequality for Trace Distance Convergence: Finally, the quantum Pinsker inequality provides a link between the trace distance ∥ρ − σ∥_1 (a measure of distinguishability) and the relative entropy D(ρ ∥ σ):

(1/2) ∥ρ − σ∥_1^2 ≤ D(ρ ∥ σ).

Applying this to the result from step 3:

E_{Q⋆}[(1/2) ∥ρ⋆_E(a_{1:t}) − Ξ^(t)_Q(a_{1:t})∥_1^2] ≤ E_{Q⋆}[D_t] ≤ (K_Q(Q⋆) ln 2 + ln(1 + g)) / t.

Using Jensen's inequality (or a linearised version of Pinsker's inequality), this implies convergence in trace distance:

E_{Q⋆}[(1/2) ∥ρ⋆_E(a_{1:t}) − Ξ^(t)_Q(a_{1:t})∥_1] = O(t^{−1/2}).

This shows that the QSI posterior state Ξ^(t)_Q(a_{1:t}) converges to the true environment state ρ⋆_E(a_{1:t}) in trace distance. The effect of ϵ-completeness in (C2), rather than perfect informational completeness, would typically introduce an additional error term in the bounds, which vanishes as ϵ → 0.

D.4 Limitations

The proof sketch above, while following a known pattern, relies on several steps that require careful and
rigorous justification in the quantum setting:

1. Martingale Theory for Quantum Processes. Classical martingale convergence theorems are well-established for sequences of real-valued random variables with respect to a filtration. Extending these to sequences of quantum states (density operators or, as here, semi-density operators like Ξ_Q) that evolve via quantum operations (measurements and unitary evolutions) is a significant technical hurdle. The notion of a filtration F_t must properly capture the information gained from sequences of quantum measurements, which is complicated by measurement back-action, the non-commutative nature of observables, and potential contextuality.

2. Relative Entropy with Semi-Density Operators. The Umegaki relative entropy D(ρ ∥ σ) is typically defined for density operators ρ and σ (where Tr ρ = Tr σ = 1). We have set out how the mixture would be adjusted (normalised) in order to render it a full density operator. An alternative approach, given that the QSI mixture Ξ_Q is a semi-density operator (Tr Ξ_Q ≤ 1), is for the derivation and the required properties (such as monotonicity under CPTP maps, and the Pinsker inequality) to be established or appropriately adapted for semi-density operators directly.

3. Measurement Back-Action. Each measurement M_{e_k} provides a classical outcome e_k and transforms the quantum state. The sequences of true states ρ⋆_E(a_{1:k}) and posterior beliefs Ξ^(k)_Q(a_{1:k}) are conditioned on the full history of actions a_{1:k} and prior outcomes e_{1:k−1}. The calculation of the expected drop in divergence requires careful averaging over outcomes e_k generated by the true environment Q⋆ acting on its state ρ⋆_E(a_{1:k−1}). The ergodicity condition (C1) is important in ensuring that the learning process is meaningful over time and does not get stuck due to highly non-stationary state sequences.

4. Contextuality and Non-locality.
As we discuss, if the true environment Q⋆ exhibits Kochen-Specker contextuality or Bell non-locality, the classical intuition that a history uniquely determines future outcome probabilities (independent of measurement choices for other compatible observables) breaks down. The requirement for an adapted, context-aware filtration complicates the structure of the conditional expectations inherent in any martingale argument; e.g. standard assumptions of conditional independence of percepts, often used in classical proofs, are violated. However, the practical implications of these foundational issues may be more or less significant depending on context (see the Appendix section below for an extended discussion).

5. Informational Completeness. Condition (C2) asserts that measurements are sufficiently informative (the Fisher information matrix is full-rank up to ϵ); the precise manner in which this guarantees a sufficient trend towards convergence at each step would require further analysis. For example, how (and whether) the error ϵ propagates into the final convergence bounds and rates would be an open question.

6. No-Cloning and Sample Complexity. The no-cloning theorem fundamentally limits the agent's ability to learn. A QAIXI agent cannot make multiple measurements on identical copies of a past unperturbed quantum state to refine its likelihood estimates for different actions from that state, as measurement consumes the state. This underscores the online and single-shot nature of quantum learning from a trajectory. It also has consequences for the effective sample complexity of learning undertaken by any QAIXI and is a significant difference from classical scenarios where data can often be re-analysed. Note that if i.i.d. copies were available at negligible cost, the
no-cloning impact may be minimal.

While the QSI convergence theorem sketch outlines a potentially plausible route to proving that a QAIXI-like agent can learn its quantum environment, a full proof faces unresolved theoretical and technical challenges. These primarily arise from the foundational differences between classical and quantum information processing, particularly concerning measurement, state representation, and contextuality. Addressing these issues remains a significant open research area in the theoretical foundations of quantum artificial general intelligence.

E Foundational Implications for QAIXI

The Kochen-Specker (KS) theorem [38] is a cornerstone of quantum foundations, reflecting central differences between quantum and classical descriptions of reality. It states that for quantum systems of dimension three or higher, it is impossible to assign definite, pre-existing values to all possible measurements (observables) in a way that is independent of the context of measurement (i.e., the order of measurement operations and which other compatible observables are measured alongside them). Here we elaborate in more detail on the implications of KS contextuality for the QAIXI framework, particularly concerning Theorem 2.

E.1 Contextuality and QAIXI's Universal Mixture

Classical AIXI operates under the assumption that an environment ν has definite properties that can be observed. The history h_{<t} is a sequence of such definite observations. However, if the true quantum environment Q⋆ leads to states whose measurements exhibit contextuality, then QAIXI's understanding of Q⋆ will differ from the classical case. Recall that Theorem 2 provides that no quantum Turing machine Q (representing a predictive model for the environment) can output a commuting family of projectors {Q_{a_{1:m}}(e_{1:m})} that perfectly predicts outcomes for a KS-uncolourable set of measurements (Eq. (16)). The consequence for QAIXI's belief state, Ξ_Q(a_{1:m}) (defined in Eq.
(10)), is significant. QAIXI's belief is a mixture that sums over all semi-computable quantum environments Q ∈ Q_sol. If Q⋆ is KS-contextual, then any individual Q within the sum that attempts to model Q⋆ using classical, non-contextual hidden variables (or by outputting commuting projectors that aim to assign definite values independent of measurement choices) will fail to accurately reproduce the statistics of Q⋆. Thus, for Ξ_Q to converge to a description of Q⋆, it must implicitly give dominant weight to models Q that are themselves contextual or whose predictive mechanisms do not rely on non-contextual value assignments.

E.2 Contextuality Consequences for Learning

In classical AIXI, the history h_{<t} is a sequence of action-percept pairs, where percepts are assumed to be objective records of definite environmental properties obtained via measurement. For QAIXI situated within a contextual quantum environment, the situation is more nuanced. The outcome o_t of a measurement action a_t may not be interpretable as revealing a pre-existing property P_j of the system. Instead, o_t is realised only upon measurement. Intuitively, its probability (and even its meaning) can depend on the complete set of compatible observables measured in action a_t. To see this, let Q⋆ be an environment that prepares a qutrit (d = 3) system. Suppose QAIXI can choose actions a_t that correspond to measuring sets of compatible projectors from a KS set (e.g., a set of projectors for a qutrit as in the Peres-Mermin square, or simpler KS sets). Let P_1, P_2, ..., P_n be such a KS set.

– If QAIXI performs action a_1 measuring the context C_1 = {P_i, P_j, P_k} (where P_i + P_j + P_k = I), it obtains outcomes (o_i, o_j, o_k).

– If it later (or in a counterfactual scenario) performs action a_2 measuring context C_2 = {P_i, P_l, P_m} (where
P_i + P_l + P_m = I), it obtains outcomes (o′_i, o′_l, o′_m).

The KS theorem implies that QAIXI cannot learn a universal value assignment v(P_x) ∈ {0, 1} such that o_i = v(P_i) and o′_i = v(P_i) consistently across all contexts for all projectors in the KS set, while also satisfying Σ_{P_x ∈ C} v(P_x) = 1 for all contexts C. This means that the Bayesian update rule for QAIXI (Eq. (11)),

Ξ^(t)_Q(a_{1:t−1}) := M_{e_{t−1}}(Ξ^(t−1)_Q(a_{1:t−2})) ...,

must process the information e_{t−1} (which includes o_{t−1}) in a way that acknowledges its contextual nature. The meaning of o_{t−1} for updating beliefs about Q⋆ is tied to the full instrument M_{e_{t−1}} and potentially the set of all measurements that constituted action a_{t−1}. This is why the posterior must be refined by the entire future instrument schedule (or at least the current one) if one aims for precise predictions in contextual scenarios. A simple classical history string h_{<t} is insufficient.

E.3 Convergence Implications

The conditions for the convergence of QSI, as sketched in Theorem 1, rely on a martingale argument. Martingales are defined with respect to a filtration, which represents the information accumulated over time. If measurement outcomes are contextual, the structure of this filtration becomes more complex. The information gained from an outcome o_t is not just about the specific observable measured but also about the context. This is why the martingale assumptions of Theorem 1 require an adapted filtration that records measurement contexts: standard conditional independence assumptions, often implicit in simpler martingale proofs, may not hold if the probability of future outcomes depends on the context of past measurements in non-trivial ways. The ergodicity (C1) and informational completeness (C2) conditions would also need to be interpreted in light of contextuality. In essence, KS contextuality reflects the differences between classical and quantum ontology, and between classical and quantum models of computation.
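The impossibility of a non-contextual value assignment can be made concrete for the Peres-Mermin square mentioned earlier. In its ±1-observable form, quantum mechanics fixes the product of the three observables in each row to +1 and the column products to +1, +1, −1; a brute-force search (a self-contained sketch, not part of the QAIXI formalism) confirms that no assignment of definite ±1 values to the nine cells satisfies all six constraints:

```python
from itertools import product

# Peres-Mermin square: nine +/-1 values arranged in a 3x3 grid.
# Quantum mechanics forces each row product to +1 and the column
# products to +1, +1, -1; we search for a classical value assignment.
rows = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
cols = [(0, 3, 6), (1, 4, 7), (2, 5, 8)]
col_signs = [+1, +1, -1]

solutions = [
    v for v in product([+1, -1], repeat=9)
    if all(v[a] * v[b] * v[c] == 1 for a, b, c in rows)
    and all(v[a] * v[b] * v[c] == s for (a, b, c), s in zip(cols, col_signs))
]
print(len(solutions))  # 0: no non-contextual value assignment exists
```

The parity argument behind the empty search: multiplying the three row constraints forces the product of all nine values to be +1, while multiplying the column constraints forces it to be −1.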
Its consequence is that any QAIXI would need to adopt a more sophisticated model of reality than classical AIXI. It cannot assume that the environment possesses a set of definite, non-contextual properties that are merely uncovered by measurement. Instead, QAIXI must learn and operate in a world where measurement outcomes are co-created by the interaction between the agent's choice of measurement context and the quantum system.

F Bell Non-Locality and No-Cloning

Beyond contextuality, other foundational quantum principles, such as Bell non-locality and the no-cloning theorem, impose significant constraints on and confer unique characteristics to QAIXI compared to its classical counterpart. We expand upon the discussion above in the subsections below.

F.1 Bell Non-Locality

Bell's theorem [6] demonstrates that quantum mechanics predicts correlations between spatially separated systems (that were previously entangled) which cannot be explained by any theory based on local hidden variables (LHVs). The implications for QAIXI are indicative of the differences between quantum and classical environments. If the true environment Q⋆ prepares entangled states and allows QAIXI to perform measurements on their subsystems at space-like separation, then QAIXI's universal mixture Ξ_Q (Eq. (10)) must accommodate models Q that are inherently non-local:

1. Classical AIXI, relying on Solomonoff induction over classical Turing machines M_sol, constructs its universal prior ξ_U (Eq. (2)) from environments that are, by their classical nature, local. Such a prior would assign zero probability to observing correlations that violate Bell inequalities (e.g., the CHSH inequality).

2. QAIXI, by summing over quantum
environments Q_sol, can, in principle, learn and adapt to a non-local Q⋆. The term 2^{−K_Q(Q)} ρ^Q_E(a_{1:m}) must include Qs whose ρ^Q_E(a_{1:m}) can lead to Bell-violating statistics upon appropriate measurements. For example, consider a QAIXI agent designed with two components, Alice and Bob, who are spatially separated.

(a) The environment Q⋆ repeatedly sends Alice and Bob a pair of qubits in an entangled Bell state, e.g., |Ψ−⟩ = (1/√2)(|01⟩ − |10⟩).

(b) At each cycle t, Alice receives a classical random bit x ∈ {0, 1} and Bob receives y ∈ {0, 1} (these could be part of the agent's internal state or from an external prompter, effectively part of the action setup).

(c) Alice chooses a measurement setting A_x (e.g., measuring her qubit along one of two directions) and obtains outcome o_A ∈ {+1, −1}. Bob similarly chooses B_y and gets o_B ∈ {+1, −1}.

(d) The action for QAIXI could be a_t = (setting choice for A_x, setting choice for B_y), and the percept e_t = (o_A, o_B, r_t). The reward r_t could be 1 if o_A · o_B = (−1)^{xy} (the CHSH game condition for certain settings) and 0 otherwise.

If Q⋆ is quantum, Alice and Bob can choose their measurement settings such that they win the CHSH game with a probability that is impossible for any classical LHV strategy. A classical AIXI, whose models ν ∈ M_sol are constrained by LHVs, would never be able to predict or achieve this quantum level of success. QAIXI, with its quantum prior Ξ_Q, could learn to implement the optimal quantum strategy and understand that Q⋆ is non-local. The presence of non-local correlations means that the percept e_t = (o_A, o_B, r_t) contains components o_A and o_B that are correlated in a way not explainable by any shared information in their common past light-cone (beyond the initial entanglement). This challenges classical notions of conditional independence typically used in agent-based convergence proofs. It can also break the martingale structure assumed in Theorem 1 unless non-local environments Q are properly accounted for in the mixture.
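The CHSH example above can be checked numerically. With the singlet state and suitably chosen measurement directions (the angles below are one optimal choice, an assumption of this sketch rather than a prescription from the paper), the win probability is cos²(π/8) ≈ 0.854, above the LHV bound of 0.75:

```python
import numpy as np

# Singlet state |Psi-> = (|01> - |10>)/sqrt(2), as in step (a)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def obs(theta):
    """A +/-1-valued qubit observable in the Z-X plane at angle theta."""
    return np.cos(theta) * Z + np.sin(theta) * X

# One optimal choice of settings for the singlet (assumed for this sketch)
alice = [obs(0.0), obs(np.pi / 2)]
bob = [obs(-3 * np.pi / 4), obs(3 * np.pi / 4)]

win = 0.0
for x in (0, 1):
    for y in (0, 1):
        E = np.trace(rho @ np.kron(alice[x], bob[y]))   # correlator <A_x B_y>
        # P(o_A * o_B = (-1)^{xy}) = (1 + (-1)^{xy} E) / 2, with uniform x, y
        win += 0.25 * (1 + (-1) ** (x * y) * E) / 2

print(round(win, 4))                          # 0.8536 = cos^2(pi/8)
print(round(np.cos(np.pi / 8) ** 2, 4), 0.75)  # quantum optimum vs LHV bound
```

Any deterministic LHV table for (o_A, o_B) as functions of (x, y) wins at most 3 of the 4 input pairs, which gives the 0.75 classical bound the text refers to.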
More detail can be found in the literature on quantum game theory [13,29,30].

F.2 No-Cloning Theorem and QAIXI's Sample Complexity

The no-cloning theorem states that it is impossible to create an identical copy of an arbitrary unknown quantum state. This has consequences for how QAIXI can learn about its environment.

Learning Firstly, because the percepts upon which QAIXI learning is based arise from QTC channels, the no-cloning theorem gives rise to differences in how QAIXI learning would occur:

1. When QAIXI performs an action a_t (especially if it is a measurement) on the environment state ρ^Q_E(a_{1:t}), this interaction alters the state. The post-measurement state is different, and the specific instance of ρ^Q_E(a_{1:t}) is consumed in yielding the outcome o_t.

2. Classical AIXI can, in principle, take a history h_{<t} and test many hypothetical continuations with a given model ν without altering the data h_{<t}. Classical information can be copied and reused.

3. QAIXI cannot do this with quantum states. It is impossible to know how Q⋆ would have responded to a different action a′_t applied to the exact same instance of ρ⋆_E(a_{1:t−1}); it would need Q⋆ to produce that state (or an identical one) again.

Sample Complexity This one-shot nature of quantum measurement on unknown states impacts the sample complexity of learning by a
QAIXI agent:

1. To distinguish between different hypotheses Q about the environment, or to estimate the expected outcome/reward for different actions, QAIXI needs to observe the environment's response multiple times.

2. Since each observation of a specific state instance is unique and unrepeatable, QAIXI effectively needs Q⋆ to prepare a new instance of a comparable state for each piece of information it wants to gather about a particular type of situation or action.

3. If QAIXI wants to learn the full characteristics of ρ⋆_E(a_{1:t}) through measurements, it is performing a form of quantum state tomography. Full tomography of an n-qubit state generally requires a number of measurement settings and repetitions that scales exponentially with n. While QAIXI's goal is not necessarily full tomography but rather optimal action selection, its ability to learn the relevant features of Q⋆ is still constrained by the information extractable per (unclonable) interaction.

4. As noted above, this means the learning rate is limited by state-preparation resources. If Q⋆ itself has computational costs or time delays associated with preparing states, this directly translates into a slower learning rate for QAIXI in terms of real time or computational steps of Q⋆.

The informational completeness condition (C2) for Theorem 1 ensures that measurements are informative. However, no-cloning dictates that this information is gathered sequentially, one unclonable sample at a time. The 1/t convergence rate for relative entropy and O(t^{−1/2}) for trace distance are asymptotic statements about the number of interactions t. The practical time or resources needed to achieve a certain level of accuracy will be higher in quantum scenarios, where each sample is precious and unrepeatable, compared to classical scenarios where data can be exhaustively analyzed.
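The 1/t and O(t^{−1/2}) rates cited here rest on the data-processing and quantum Pinsker inequalities from the proof sketch in Appendix D.3. Both can be checked numerically on random density operators; the sketch below uses full dephasing as an illustrative CPTP map (my choice for the demonstration, not the instrument of the formalism):

```python
import numpy as np

def random_density(d, rng):
    """Random full-rank density matrix (normalised Wishart)."""
    a = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def rel_entropy(rho, sigma):
    """Umegaki relative entropy D(rho || sigma) in nats."""
    def logm(m):
        vals, vecs = np.linalg.eigh(m)
        return vecs @ np.diag(np.log(np.clip(vals, 1e-12, None))) @ vecs.conj().T
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

def trace_norm(m):
    """||m||_1 for Hermitian m: sum of absolute eigenvalues."""
    return float(np.sum(np.abs(np.linalg.eigvalsh(m))))

def dephase(m):
    """A simple CPTP map: full dephasing in the computational basis."""
    return np.diag(np.diag(m))

rng = np.random.default_rng(0)
dpi_ok = pinsker_ok = True
for _ in range(200):
    rho, sigma = random_density(3, rng), random_density(3, rng)
    D = rel_entropy(rho, sigma)
    # Quantum Pinsker: (1/2)||rho - sigma||_1^2 <= D(rho || sigma)
    pinsker_ok &= 0.5 * trace_norm(rho - sigma) ** 2 <= D + 1e-9
    # Data-processing inequality: D(M(rho) || M(sigma)) <= D(rho || sigma)
    dpi_ok &= rel_entropy(dephase(rho), dephase(sigma)) <= D + 1e-9

print(dpi_ok, pinsker_ok)  # True True
```

Both inequalities hold on every sample, matching steps 2 and 4 of the convergence sketch.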
Bell non-locality thus expands the class of environments QAIXI must consider beyond classical capabilities, while the no-cloning theorem imposes a fundamental restriction on the efficiency with which QAIXI can extract information from its quantum world, directly impacting its sample complexity for learning.

G Notation

Glossary of Symbols

a_t : Action chosen by the agent at cycle t (classical label).
o_t, r_t : Observation and reward components of the percept.
e_t = (o_t, r_t) : Percept at cycle t.
h_{<t} : Complete history up to but not including cycle t.
A, O : Action and observation alphabets.
m : Fixed lifetime / planning horizon.
γ : Discount factor in [0, 1).
μ, ν : Classical (semi-computable) environments.
K(ν) : Classical Kolmogorov complexity of ν.
ξ_U : Classical Solomonoff prior Σ_ν 2^{−K(ν)} ν(·).
H_A, H_E : Hilbert spaces of the agent and environment registers.
ρ^(t)_A, ρ^(t)_E : Agent / environment density operators for cycle t.
ρ^(t)_AE : Joint state of agent and environment.
Φ_{U_{a_t}} : Unitary CPTP channel implementing a coherent action a_t.
I_{a_t} = {E^{a_t}_k}_k : Quantum instrument representing a measurement action.
Γ_obs : Finite outcome alphabet of the instrument.
M^{a_t}_k : Kraus / POVM operators of branch k.
Q_sol : Set of chronological, semi-computable quantum environments.
Q : Individual quantum environment (CPTP channel family).
|Q⟩ : Choi-Jamiołkowski purification of an environment channel.
K_Q(Q) : Quantum Kolmogorov complexity (Eq. 4).
Ξ_Q(a_{1:m}) : Operator-valued universal mixture (semi-density operator).
ξ_Q(e_{1:m} ∥ a_{1:m}) : Scalar probability obtained by projecting Ξ_Q onto the instrument POVM.
U_univ : Fixed universal quantum Turing machine.
V^π_Q : Discounted value of policy π in environment Q.
V^π_{Ξ_Q} : Value averaged under the universal mixture (Eq. 8).
π_QAIXI : Optimal policy that maximises V^π_{Ξ_Q} (Eq.
arXiv:2505.21171v1 [cs.CL] 27 May 2025

M-Wanda: Improving One-Shot Pruning for Multilingual LLMs

Rochelle Choenni (1) and Ivan Titov (1,2)
University of Amsterdam (1), University of Edinburgh (2)
r.m.v.k.choenni@uva.nl, ititov@inf.ed.ac.uk

Abstract

Multilingual LLM performance is often critically dependent on model size. With an eye on efficiency, this has led to a surge in interest in one-shot pruning methods that retain the benefits of large-scale pretraining while shrinking the model size. However, as pruning tends to come with performance loss, it is important to understand the trade-offs between multilinguality and sparsification. In this work, we study multilingual performance under different sparsity constraints and show that moderate ratios already substantially harm performance. To help bridge this gap, we propose M-Wanda, a pruning method that models cross-lingual variation by incorporating language-aware activation statistics into its pruning criterion and dynamically adjusts layerwise sparsity based on cross-lingual importance. We show that M-Wanda consistently improves performance at minimal additional costs. We are the first to explicitly optimize pruning to retain multilingual performance, and hope to inspire future advances in multilingual pruning.1

1 Introduction

Large language models (LLMs) have demonstrated strong multilingual capabilities, with their ability to process and generate text in numerous languages improving substantially as the model size increases (He et al., 2024). This emergent multilingualism can largely be attributed to the vast amount of multilingual data used for pretraining and the increased model capacity that allows for better generalization over linguistic patterns across multiple languages. However, the steep increase in model scale comes with substantial computational and environmental costs, making efficient deployment in resource-constrained environments challenging (Ogueji et al., 2022).
1 https://github.com/RochelleChoenni/M-Wanda

To address these challenges, model compression techniques, such as pruning, quantization, and distillation, have been widely explored to reduce model size while retaining performance (Zhu et al., 2024). However, despite the effectiveness of such methods, their evaluation focuses mainly on maintaining English performance (Yang et al., 2024), with limited consideration of their impact on multilingual performance (Zeng et al., 2024; Kurz et al., 2024; Ogueji et al., 2022). Given that multilingual performance is crucial for equitable LLMs, ensuring that compression does not disproportionately harm performance in non-English languages is essential.

In this paper, we study the effect of sparsity on the multilingual performance of six open-source LLMs of varying sizes. For model compression, we focus on a SOTA one-shot unstructured pruning method, Wanda (Sun et al., 2023), and evaluate language modeling abilities and zero-shot task performance at varying sparsity levels across 15 languages and six downstream tasks. Our results show that Wanda, despite its strong performance in English, causes substantial degradation in multilingual performance, particularly at sparsity levels higher than 50% and in underrepresented languages. These findings highlight an important limitation. As Wanda was developed to optimize for global importance, we hypothesize that it fails to account for cross-lingual variation in neuron importance, despite being exposed to multilingual calibration data, which leads to the removal of weights that are important for specific languages. To help bridge this
gap, we propose a novel multilingual pruning method, M-Wanda, which is a multilingual extension of Wanda. M-Wanda improves on Wanda by incorporating language-aware input activation statistics to better inform pruning decisions at minimal additional costs. Moreover, M-Wanda dynamically adjusts sparsity ratios across layers based on cross-lingual correlation scores, ensuring that layers that are important for cross-lingual sharing are pruned less aggressively. Together, these techniques allow us to better balance the contribution of shared and specialized neurons to weight importance. To the best of our knowledge, our work is the first to optimize pruning for multilingual retention, and to explicitly model cross-lingual activation variance and inter-language correlation to guide pruning decisions and layerwise sparsity allocation.

We show that M-Wanda consistently reduces perplexity across all languages and that this translates into performance improvements on all downstream tasks. Importantly, we show that M-Wanda generalizes well beyond the set of languages included in the calibration data. In addition, we show that the techniques introduced in M-Wanda can also be integrated with RIA (Zhang et al., 2024), a more recent pruning method, thus showcasing their general usefulness in extending pruning to a multilingual setting. Finally, our findings highlight the need to evaluate pruning methods beyond English-centric compression benchmarks (Yang et al., 2024) and emphasize the importance of optimizing the pruning strategy to preserve multilingual performance. In doing so, we hope to contribute to the development of more efficient LLMs that remain effective in many languages.

2 Background and related work

2.1 Compression through pruning

Pruning reduces model size by removing unnecessary weights or neurons (LeCun et al., 1989) and can be grouped into iterative and one-shot methods.
Iterative methods (Frankle and Carbin, 2018; Blalock et al., 2020) repeatedly prune a small percentage of weights, followed by retraining to recover performance, until a target threshold is met. While this does not require a predefined sparsity ratio, the additional training cycles can be expensive. One-shot methods, instead, remove a predefined fraction of weights in a single pass after the model is trained to convergence, and do not require retraining (Frantar and Alistarh, 2023; Sun et al., 2023). One-shot pruning has gained popularity because of its simplicity and ability to maintain competitive performance.

2.2 One-shot pruning methods

SparseGPT (Frantar and Alistarh, 2023) introduced one-shot pruning by sequentially processing model layers and solving a local quadratic optimization problem to minimize reconstruction error under sparsity constraints. Yet, this requires a weight update after pruning and backward passes for gradient computation. Sun et al. (2023) show that SparseGPT can be simplified to a gradient-free variant, Wanda, that achieves competitive performance without the need for parameter updates.

Wanda method To determine the importance of model weights, Sun et al. (2023) propose to incorporate the absolute weight value and the norm of the input activations of the neurons into the pruning criterion. Formally, let the input to a layer be denoted by X ∈ R^{(N×T)×C_in}, where N is the batch size and T the sequence length.
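The Wanda criterion formalised in Eq. (1) below scores each weight by its magnitude times the ℓ2-norm of its input feature column, comparing weights within each output row. A minimal numpy sketch, with hypothetical shapes and calibration data:

```python
import numpy as np

def wanda_prune(W, X, sparsity=0.5):
    """Prune W by the Wanda score S_ij = |W_ij| * ||X_j||_2,
    comparing weights within each output row.

    W: (C_out, C_in) weight matrix.
    X: (N*T, C_in) calibration input activations.
    """
    col_norms = np.linalg.norm(X, axis=0)   # ||X_j||_2 across all N*T tokens
    S = np.abs(W) * col_norms               # importance scores
    k = int(W.shape[1] * sparsity)          # weights to remove per output row
    W_pruned = W.copy()
    for i in range(W.shape[0]):
        drop = np.argsort(S[i])[:k]         # indices of lowest-scoring weights
        W_pruned[i, drop] = 0.0
    return W_pruned

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))            # toy layer
X = rng.standard_normal((32, 16))           # toy calibration activations
W_half = wanda_prune(W, X, sparsity=0.5)
print((W_half == 0).mean())  # 0.5: half of the weights are zeroed
```

No gradients or weight updates are needed, which is what makes the method one-shot and cheap relative to SparseGPT.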
The weight matrix W ∈ R^{C_out×C_in} connects the input features to the output units. For each weight element W_{i,j}, the importance score is defined as:

S_{i,j} = |W_{i,j}| · ∥X_j∥_2 (1)

where ∥X_j∥_2 denotes the ℓ2-norm of the j-th input feature column across all N×T tokens. This score reflects the contribution of each weight based on both its magnitude and the aggregated strength of the corresponding input feature. Note that collecting input activation statistics requires a set of input samples, which we refer to as calibration data. Finally, a strong, commonly used baseline is magnitude pruning (Han et al., 2015), in which only the weight magnitude S_{i,j} = |W_{i,j}| is considered.

Layerwise sparsity Sparsity allocation methods were developed to mitigate model degradation by enforcing different sparsity ratios across layers, and have been shown to improve performance (Li et al., 2024; Huang et al., 2025). Rather than pruning uniformly, such methods estimate how much to prune based on the layers' sensitivity or redundancy. Concretely, given a global sparsity ratio R, the goal is to derive a set of target layerwise sparsity ratios [r_0, r_1, ..., r_L] such that:

(1/(L+1)) Σ_{n=0}^{L} r_n = R.

One such method is Outlier Weighted Layerwise sparsity (OWL) (Yin et al., 2024), which uses per-layer outlier counts (i.e. activations that exceed M times the mean) as global importance scores C = [c_0, c_1, ..., c_L] for allocating sparsity. To prevent extreme imbalances between layers, they introduce a hyperparameter λ that restricts each ratio to fall within a small interval around the global sparsity rate, specifically r_n ∈ [R−λ, R+λ], while maintaining a mean sparsity ratio of R across all layers. To achieve this, the raw importance scores C are rescaled to the range [0, 2λ] and shifted so that the resulting values are centered around R. Following the intuition that layers that are more important should be pruned less, the sparsity ratios are then defined as: r_n = 1 − c_n.

2.3 Multilingual pruning

Ogueji et al.
(2022) first studied the effect of model pruning on multilingual performance. However, their scope was limited to iterative methods, smaller models, and a single task. More recently, Zeng et al. (2024) and Kurz et al. (2024) studied the multilingual performance of LLMs using SparseGPT and Wanda. They both study how varying the composition of calibration data from different languages affects performance, and show that using a mixture of languages yields better results. However, both studies are limited to modifying the calibration data, without altering the pruning method itself, and restrict their analysis to compression at 50%. In this work, we show that 50% sparsity already substantially harms multilingual performance. Moreover, this sparsity ratio is enforced uniformly across model layers, despite substantial evidence that model layers play different roles in language-specific and cross-lingual processing (Tang et al., 2024; Kojima et al., 2024). We, instead, study multilingual performance under different sparsity constraints and introduce M-Wanda, a novel pruning method that builds on Wanda by using multilingual calibration data, incorporating language-aware scoring, and combining it with an OWL-inspired dynamic sparsity allocation method.

3 M-Wanda method

While Zeng et al. (2024) and Kurz et al. (2024) show that using Wanda
with multilingual calibration data improves performance, Wanda was developed to preserve weights that are globally important, and by averaging input activations across languages, we might suppress language-specific signals that are essential for multilingual retention. Thus, we enhance Wanda in three key ways: (1) We assign layerwise sparsity based on the degree of cross-lingual activation similarity, applying less aggressive pruning to layers that are more important for cross-lingual sharing. (2) We incorporate cross-lingual activation variance into the pruning criterion to encourage retention of specialized neurons that might, for instance, support underrepresented or typologically distinct languages. (3) We introduce an activation probability term to discourage retention of high-variance neurons that rarely activate, helping to filter out noisy or spurious features. Together, these additions bias pruning toward preserving both shared and consistently active specialized neurons, thereby improving multilingual retention at minimal additional costs.

3.1 Correlation Weighted Layerwise (CWL) sparsity

We introduce Correlation Weighted Layerwise (CWL) sparsity to guide sparsity allocation decisions across model layers. In contrast to OWL (Yin et al., 2024), which scores layer importance based on outlier counts, CWL uses Pearson correlation coefficients to approximate activation similarity both within and between languages to determine importance. We hypothesize that layers that exhibit high inter-language activation similarity are more involved in cross-lingual sharing and better facilitate multilingual generalization. As such, we apply less aggressive pruning to them. However, when intra-language correlation scores are low, this suggests unstable or noisy representations. To correct for this, we adjust the inter-language correlation score using intra-language scores.
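The CWL score defined next (Eqs. (2)-(3)) averages the pairwise Pearson correlations of per-language mean activations and scales the result by intra-language stability. A sketch for a single sublayer, with hypothetical toy inputs:

```python
import numpy as np

def cwl_layer_score(mu, intra):
    """Sublayer importance in the spirit of Eqs. (2)-(3).

    mu:    (|L|, hidden) mean input activations per language (token-averaged).
    intra: (|L|,) intra-language correlation (stability) scores.
    """
    L = mu.shape[0]
    pair_corrs = [np.corrcoef(mu[i], mu[j])[0, 1]
                  for i in range(L) for j in range(i + 1, L)]
    inter = float(np.mean(pair_corrs))   # Eq. (2): mean pairwise correlation
    return inter * float(intra.sum())    # Eq. (3): adjust by intra-language stability

rng = np.random.default_rng(0)
shared = rng.standard_normal(64)                   # structure shared across languages
mu = shared + 0.1 * rng.standard_normal((5, 64))   # 5 toy "languages", small deviations
intra = np.full(5, 0.9)                            # hypothetical stability scores
c = cwl_layer_score(mu, intra)
print(round(c, 2))  # close to 0.9 * 5, since the toy languages correlate strongly
```

Per-layer scores c_n would then be rescaled around the global ratio R as in OWL and converted to sparsity ratios r_n = 1 − c_n, so highly cross-lingual layers are pruned less.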
Concretely, we first compute Pearson correlation scores between the mean input activations (aggregated across tokens) for each language and sublayer \mu^{(k)}_{\ell} of the attention or MLP block.² To compute the average inter-language correlation score for sublayer k and a set of languages L, we then take the mean of all pairwise correlations:

\mathrm{Inter}^{(k)} = \frac{2}{|L|(|L|-1)} \sum_{i<j} \mathrm{corr}\big(\mu^{(k)}_{\ell_i}, \mu^{(k)}_{\ell_j}\big)    (2)

Moreover, we adjust inter-language scores using intra-language scores, assigning more importance when both are high, yielding:

c^{(k)} = \mathrm{Inter}^{(k)} \cdot \sum_{\ell \in L} \mathrm{Intra}^{(k)}_{\ell}    (3)

This score reflects how shared representations are between languages and how stable they are within languages. To obtain a single importance score for each layer n: [c_0, c_1, \ldots, c_L], we take the average over all sublayers. We then apply the same procedure as OWL, described in Section 2.2, to ensure that the mean sparsity is equal to the global ratio R, and assign lower ratios to layers with more importance: r_n = 1 - c_n. We find that setting \gamma to 0.04 generally works well across LLMs.

²Note that input activations are shared between query, key and value, and between the MLP gate and up projection layers.

3.2 Cross-lingual activation variance

Recall from Eq. 1 that Wanda incorporates both weight importance and activation strength. We now enhance the activation scores by storing the mean of activation values per language \mu_\ell and computing the variance in neuron activation across languages:

\mathrm{Var}_{\mathrm{inter}} = \frac{1}{|L|} \sum_{\ell \in L} (\mu_\ell - \bar{\mu})^2    (4)

By adding this inter-language variance score, we give more importance to neurons whose input activations show highly variable responses across languages, meaning that
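Under the definitions above, the CWL allocation can be sketched in NumPy as follows. This is a minimal sketch with our own function and variable names: each dictionary entry is treated as one layer score (whereas the paper first averages over sublayers), `gamma` is the OWL-style bound hyperparameter (0.04 in the paper), and the linear rescaling into [R − γ, R + γ] mirrors OWL's procedure rather than reproducing the authors' exact code:

```python
import numpy as np

def cwl_sparsity(mean_acts, intra_scores, R=0.5, gamma=0.04):
    """Sketch of Correlation Weighted Layerwise (CWL) sparsity allocation.
    mean_acts[k]: (num_langs, hidden_dim) token-averaged input activations.
    intra_scores[k]: (num_langs,) intra-language stability scores.
    Returns per-layer sparsity ratios whose mean equals the global ratio R."""
    importance = []
    for k in sorted(mean_acts):
        mu = mean_acts[k]
        n_lang = mu.shape[0]
        corr = np.corrcoef(mu)                   # pairwise Pearson (Eq. 2)
        iu = np.triu_indices(n_lang, k=1)
        inter = corr[iu].mean()                  # mean over language pairs
        importance.append(inter * intra_scores[k].sum())  # Eq. 3
    c = np.asarray(importance)
    c = (c - c.min()) / (c.max() - c.min())      # min-max normalize to [0, 1]
    # OWL-style rescaling: more important layers get LOWER sparsity;
    # ratios lie in [R - gamma, R + gamma] before the mean correction
    ratios = R + gamma * (1.0 - 2.0 * c)
    ratios += R - ratios.mean()                  # enforce mean sparsity R
    return ratios
```

Keeping the per-layer ratios inside a narrow band around R, as OWL does, prevents any single layer from being pruned so aggressively that it collapses, while still protecting the most cross-lingually shared layers.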
they might be very important to some specific languages. Yet, the input activations within a language also fluctuate between input samples. If the intra-language variance is high, it introduces noise into our pruning metric, making the inter-language variance less reliable. Therefore, we assess how much neuron activation varies between languages relative to how much it varies within individual languages:

\mathrm{VAR} = \frac{\mathrm{Var}_{\mathrm{inter}}}{\frac{1}{|L|} \sum_{\ell=1}^{|L|} \mathrm{Var}^{\ell}_{\mathrm{intra}}}    (5)

This means that we assign higher scores to neurons that exhibit high inter-language variance but low intra-language variance. We then add this term to Wanda's activation score:

A_{X_j} = \|X_j\|_2 + \lambda \cdot \mathrm{VAR}    (6)

To balance the trade-off between language-specificity and generalization, we add a scaling term \lambda for which the optimal value is found through a grid search. Also, note that before adding variance scores, we apply min-max normalization.

3.3 Activation probability

Finally, we correct the overall weight importance scores based on the average activation probability across languages. This is motivated by the idea that high-variance neurons that are upweighted by Eq. 6, but rarely activate, are noisy and should be filtered out. To compute this activation probability, we simply count how often the input activations are higher than some threshold value \epsilon. Given that recent LLMs rely on activation functions that also allow for negative activations that can carry meaningful information, we consider absolute activation values.³ As such, we end up with the following pruning metric:

S_{i,j} = \big(|W_{i,j}| \cdot A_{X_j}\big) \cdot P\big(\mathbb{I}(|X_j| > \epsilon)\big)    (7)

where \mathbb{I}(\cdot) is the indicator function.

³Using absolute values was found to work better than positive ones, showing that negative signals carry information.

4 Experiments

Calibration and test languages For calibration and evaluation we use 15 languages: English, German, Spanish, French, Italian, Portuguese, Hindi, Russian, Korean, Japanese, Vietnamese, Chinese, Indonesian, Turkish, and Arabic.
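Putting Eqs. 4–7 together, the full M-Wanda weight score can be sketched as follows. This is a NumPy sketch with our own naming; the actual implementation accumulates these statistics per layer while streaming calibration batches rather than materializing all activations at once:

```python
import numpy as np

def m_wanda_scores(W, acts_per_lang, lam=0.2, eps=5e-5):
    """Sketch of the M-Wanda pruning metric (Eqs. 4-7).
    W: (out_features, in_features) weight matrix.
    acts_per_lang: list of per-language activation matrices,
    each of shape (num_tokens, in_features).
    Lowest-scoring weights are pruned first."""
    mus = np.stack([A.mean(axis=0) for A in acts_per_lang])    # mu_l per language
    var_inter = mus.var(axis=0)                                # Eq. 4
    var_intra = np.mean([A.var(axis=0) for A in acts_per_lang], axis=0)
    VAR = var_inter / (var_intra + 1e-12)                      # Eq. 5
    VAR = (VAR - VAR.min()) / (VAR.max() - VAR.min() + 1e-12)  # min-max normalize
    X = np.concatenate(acts_per_lang, axis=0)                  # pooled calibration set
    a_x = np.linalg.norm(X, axis=0) + lam * VAR                # Eq. 6
    p_active = (np.abs(X) > eps).mean(axis=0)                  # activation probability
    return np.abs(W) * (a_x * p_active)                        # Eq. 7, broadcasts over rows
```

Note how the activation-probability factor multiplicatively shrinks the score of input features that almost never exceed the threshold, so a neuron cannot survive pruning on variance alone.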
These languages belong to 8 different families and 11 sub-families, and cover 7 writing scripts.

Calibration data Following prior work, we use 128 random samples of 2048 tokens from the multilingual C4 (MC4) dataset for calibration (Raffel et al., 2020). While recent studies show that the data source affects pruning quality (Williams and Aletras, 2024; Ji et al., 2024; Bandari et al., 2024), we use MC4 to limit the scope of this work and ensure comparability with existing literature. To adhere to the maximum of 128 samples, our calibration data includes 16 samples from English and 8 from each of the other test languages.

Models We study six open-source LLMs at different model sizes: Llama3 (1B, 3B and 8B) (Grattafiori et al., 2024), Aya-23 (8B) (Dang et al., 2024), OLMo-7B (Groeneveld et al., 2024) and Bloomz-7b1 (Muennighoff et al., 2023).

Zero-shot performance We perform zero-shot evaluation on six tasks that test the LLMs' abilities in reasoning (XStoryCloze, XCOPA) (Ponti et al., 2020), coreference resolution (XWinograd) (Muennighoff et al., 2023), reading comprehension (Lambada) (Paperno et al., 2016), natural language understanding (XNLI) (Conneau et al., 2018), and paraphrasing (PAWS-X) (Yang et al., 2019). For consistent evaluation we employ the EleutherAI evaluation harness.⁴

Model perplexity We test general language modeling abilities by measuring perplexity on datasets different from the one used for calibration. Specifically, we evaluate perplexity on the entire
Flores-101 (dev + devtest) dataset, which contains parallel data from Wikipedia. To test whether our results are robust across different domains, we also evaluate on the XL-Sum dataset, which contains high-quality articles from BBC (Hasan et al., 2021).

⁴https://github.com/EleutherAI/lm-evaluation-harness

[Figure 1: The effect of Wanda pruning under different sparsity ratios (0.00–0.35, 0.35–0.40, 0.40–0.45, 0.45–0.50, 0.50–0.55) on the perplexity of each calibration language, for LLaMA-1B, LLaMA-3B, and LLaMA-8B. Colored areas denote the increase in perplexity when increasing the sparsity ratio. Note that the perplexity scores are on different scales across models.]

[Figure 2: Performance in accuracy (%) given different sparsity ratios (0%, 40%, 50%) used on different sizes of Llama3. Zero-shot results are averaged across test languages per downstream task (XCOPA, XStory, XWino, Lambada, XNLI, PAWS-X, Avg.).]

5 Results

In Section 5.1, we first show how pruning under different sparsity constraints affects multilingual LLMs of different sizes. Motivated by these findings, we show in Section 5.2 how our M-Wanda method can help mitigate some of the multilingual performance loss induced by pruning.

5.1 Wanda's impact on multilinguality

We prune our models using Wanda with sparsity ratios between 35% and 60% at 5% intervals.
In Figure 1, we see that across all languages and different sizes of Llama, perplexity has already substantially increased when going from 45% to 50% sparsity (red area), especially for underrepresented languages (typically not from the Indo-European family). This casts doubt on the common practice of adopting the default sparsity ratio of 50% in the multilingual setting (Zeng et al., 2024; Kurz et al., 2024). Importantly, the same degradation is not found in English when only using English calibration data (see Appendix D), the setting used in the original paper (Sun et al., 2023). Similarly, a clear degradation across all downstream tasks is visible when going from 40% to 50% sparsity; see Figure 2. In fact, when comparing larger models pruned to 50% of their original capacity with their smaller dense counterparts (i.e. Llama 3B at 50% versus Llama 1B, and Llama 8B at 50% versus Llama 3B), we see that they are not able
to outperform them, despite still having a larger capacity.

5.2 Improvements with M-Wanda

In Section 5.1, we showed that Wanda with 50% sparsity leads to a substantial drop in multilingual performance. This degradation highlights an area of potential improvement for M-Wanda, and we hypothesize that more optimally balancing the importance between specialized and shared neurons would allow us to better retain multilingual performance. In Table 1, we show how M-Wanda is able to reduce the average perplexity across languages for all models on the Flores dataset (see Appendix A for the optimal hyperparameters selected for each model and Appendix B for results on XL-Sum).⁵ Moreover, we find that this holds across different model sizes.

⁵Perplexity from magnitude pruning is notably higher on Llama. We find that performance is reasonable in English, yet explodes on other languages, yielding high average scores.

Method | Llama3-1B | Llama3-3B | Llama3-8B | Aya-23-8B | Bloomz-7b1 | OLMo-7B
Magnitude | 17605 | 1579 | 403 | 36.12 | 29.64 | 33.55
RIA* | 71.75 | 27.88 | 20.45 | 25.28 | 24.05 | 30.45
Wanda | 63.29 | 26.42 | 19.59 | 24.34 | 24.71 | 23.23
M-Wanda | 59.56 (6%↓) | 24.56 (7%↓) | 18.57 (5%↓) | 23.87 (2%↓) | 24.32 (2%↓) | 21.54 (7%↓)

Table 1: Average perplexity on Flores across all calibration languages at a sparsity ratio of 50%. For M-Wanda, we also report the relative percentage decrease compared to Wanda. *Refer to Section 7 for an introduction to RIA.

 | XCOPA | XStory | XWino | Lambada | XNLI | PAWS-X | Avg.
Wanda | 60.36 | 60.86 | 73.81 | 42.08 | 45.07 | 57.43 | 56.60
M-Wanda | 61.16 | 61.49 | 74.54 | 44.68 | 46.51 | 58.23 | 57.77

Table 2: Average performance (%) on downstream tasks when using Wanda versus M-Wanda on Llama-8B.

Importantly, while lower perplexity does not always guarantee better performance on downstream tasks, in Table 2 we show that the improvements achieved by M-Wanda are substantial enough to improve performance on all six downstream tasks.
When taking a closer look at the effect of M-Wanda on individual test languages in Figure 3, we see that M-Wanda consistently reduces the perplexity on all 15 languages for Llama-8B. Moreover, we see that the languages most typologically different from English, i.e. Arabic, Turkish, Vietnamese, Chinese, Korean and Japanese, obtain larger absolute gains from M-Wanda than the Indo-European languages. Similarly, when looking at downstream task improvements, we find that M-Wanda tends to consistently improve performance on all individual test languages, with a few exceptions (mostly English); see Appendix F for full results.

5.2.1 Generalization to unseen languages

Previously, we used the same set of languages for calibration and testing. We now test whether the performance improvements also generalize beyond our calibration languages. As such, we use 15 different languages for evaluation: Czech, Polish, Ukrainian, Bulgarian, Tamil, Marathi, Urdu, Bengali, Kannada, Gujarati, Javanese, Thai, Swahili, Zulu, and Persian. In Figure 4, we show that our method consistently reduces perplexity across all of these languages, despite them not being included in the calibration set. Specifically, M-Wanda results in a 6% decrease in average perplexity compared to Wanda (18.98 versus 20.09), which is higher than on the calibration languages themselves. This is likely because our unseen languages include many more underrepresented languages, and our method seems
[Figure 3: Perplexity scores per language from Llama-8B pruned using Wanda versus M-Wanda.]

to book larger performance gains on those. Importantly, however, this suggests that M-Wanda more generally helps to preserve language variance and is not only adjusting to the language-specific patterns of the calibration languages.

5.2.2 Effectiveness at different sparsity levels

While we already saw that M-Wanda improves performance across different model sizes, we now also test its effectiveness across different sparsity ratios. In Figure 5, we plot the average perplexity scores obtained with Wanda and M-Wanda at different sparsity levels. We see that M-Wanda remains effective at higher ratios, and that the average improvement of M-Wanda over Wanda increases substantially when applying more aggressive pruning. At the extreme sparsity ratio of 70%, we find that M-Wanda reduces average perplexity by as much as 52% (see Appendix C for full results).

5.2.3 Robustness analysis

Sensitivity to calibration samples While we limited the scope of this paper to randomly selecting calibration samples from the MC4 dataset, we now also test the sensitivity of our method to the calibration set.

[Figure 4: Relative percentage decrease in perplexity when using M-Wanda compared to Wanda for all 15 calibration and 15 unseen test languages. Results are reported for Llama-8B.]

[Figure 5: Average perplexity scores across languages as an effect of higher sparsity ratios when applying Wanda and M-Wanda to Llama-8B.]
Specifically, we use 3 random seeds to select calibration data and recompute average perplexity using Wanda versus M-Wanda. We find that across all three runs, M-Wanda outperforms Wanda: on average, Wanda obtains a perplexity of 19.37 ± 0.27 and M-Wanda 18.63 ± 0.22.

Sensitivity to calibration languages Finally, we study how selecting different subsets of languages from the full calibration set affects performance. These subsets vary both in size, which impacts the number of samples per language and, consequently, the robustness of language-specific signals, and in their typological composition, which might influence how well calibration generalizes across languages. To study this, we draw multiple random subsets of languages for calibration. Specifically, we each time sample 5 unique subsets of size m uniformly at random from the set of all languages L: S_m^{(i)} ~ Unif({S ⊂ L : |S| = m}). In Figure 6, we plot the average perplexity on the full calibration set L as a function of the typological diversity of the calibration languages in the different language subsets of size m. These diversity scores are defined as the mean of pairwise

[Figure 6: Average M-Wanda perplexity on Llama-8B as a function of the typological coverage of the calibration languages. Subsets are colored based on their size m.]
cosine similarity between their URIEL language representations (Malaviya et al., 2017).⁶ In general, we find that higher typological diversity leads to better performance. However, we also observe that a few language subsets can outperform the full calibration set, suggesting that optimal calibration may depend more on carefully selecting which languages are included than on increasing the set's size.

⁶We use syntax_knn features from the Lang2Vec library.

6 Ablation study

To understand where the performance improvements of M-Wanda come from, we now perform an ablation study, isolating the impact of the individual enhancements that were added to the original Wanda method. When we combine the original Wanda metric with OWL instead of CWL allocation, we find that OWL⁷ reduces average perplexity to a lesser extent, and for the 1B model even worsens it; see Table 3. Importantly, we also find that enhancing Wanda+OWL with cross-lingual variation and activation probability further improves performance to 19.09 on Llama-8B. This shows that our proposed enhancements can more generally be paired with different allocation methods and do not work exclusively in combination with CWL. Moreover, in Figure 7 we plot the sparsity ratio allocated per layer for Llama-8B and Aya-8B using CWL.

⁷We used the optimal hyperparameters reported in the original paper, i.e., M ∈ [3, 5] and γ = 0.08, but found that γ = 0.04, as used for CWL, works better and thus report scores using the latter for a fairer comparison.

[Figure 7: Layerwise sparsity allocation using CWL for Llama-8B and Aya-8B.]

 | Wanda | Wanda+OWL | Wanda+CWL | M-Wanda
1B | 63.29 | 65.10 | 60.50 | 59.56
3B | 26.42 | 26.02 | 24.61 | 24.56
8B | 19.59 | 19.14 | 18.61 | 18.57

Table 3: M-Wanda ablation on the Llama3 models. We report average perplexity on the calibration languages.
In general, we see that the lower layers and the last few top layers receive less aggressive pruning. The fact that this results in better multilingual performance can likely be connected to these layers having been shown to be more involved in cross-lingual processing (Zhao et al., 2024) (see Appendix E for allocation results using OWL).

7 Extendability to other pruning methods

Relative Importance and Activations (RIA) is a SOTA pruning method that has been shown to outperform Wanda (Zhang et al., 2024). It aims to improve upon Wanda by re-evaluating the importance of each weight element W_{i,j} based on all connections that originate from the input neuron j or lead to the output neuron i:

\mathrm{RIA}_{i,j} = \mathrm{RI}_{i,j} \cdot (\|X_j\|_2)^{\alpha} = \left( \frac{|W_{i,j}|}{\sum |W_{*j}|} + \frac{|W_{i,j}|}{\sum |W_{i*}|} \right) \cdot (\|X_j\|_2)^{\alpha}    (8)

where \sum |W_{*j}| and \sum |W_{i*}| sum over the absolute values of the weights in input channel j and output channel i, respectively. Yet, while we find that RIA outperforms Wanda on English at 50% sparsity (25.05 versus 25.16 on Llama-8B), the average

 | RIA | M-RIA
Llama3.2-1B | 71.75 | 66.22 (8%↓)
Llama3.2-3B | 27.88 | 25.53 (8%↓)
Llama3.1-8B | 20.45 | 19.03 (7%↓)
Aya-23-8B | 25.28 | 24.82 (2%↓)
Bloomz-7b1 | 24.05 | 23.65 (2%↓)
OLMo-7B | 30.45 | 26.21 (14%↓)

Table 4: Average perplexity scores on the calibration languages for Flores using RIA (α = 0.5) at 50% sparsity.

perplexity across all 15 calibration languages tends to increase instead
(20.45 versus 19.59 on Llama-8B). This further highlights the need for multilingual evaluation of pruning methods. Nonetheless, to test the compatibility of our proposed method with different pruning criteria, we now add cross-lingual variance and activation probability to RIA, and apply CWL to obtain layerwise sparsity, yielding M-RIA:

S_{i,j} = (\mathrm{RI}_{i,j} \cdot A_{X_j}) \cdot P\big(\mathbb{I}(|X_j| > \epsilon)\big), \quad \text{where } A_{X_j} = (\|X_j\|_2)^{\alpha} + \lambda \cdot \mathrm{VAR}    (9)

Note that we adopt α = 0.5, which Zhang et al. (2024) found to be optimal for various LLMs. In Table 4, we show that M-RIA is also able to consistently improve over RIA, nicely demonstrating the general advantage of our proposed method for adaptation to a multilingual setting.

8 Conclusion

In this paper, we shed light on the limitations of SOTA pruning methods in a multilingual setting and introduce M-Wanda, a novel pruning method that explicitly models cross-lingual variation in weight importance. By incorporating language-aware activation statistics and adaptive sparsity allocation, M-Wanda substantially improves multilingual retention over existing methods, particularly for underrepresented languages and at high sparsity ratios. Our results show that multilingual pruning requires strategies that go beyond global importance scoring, and highlight the benefits of considering the importance of specialized neurons. We hope that these insights help advance the state of multilingual pruning by underscoring the broader need for multilingual evaluation and design in LLM sparsification, and inspire new directions to improve multilingual pruning beyond the modification of calibration data.

9 Limitations

Our improvements to the original Wanda method come at minimal additional computational cost. Specifically, we only compute additional statistics from the activation inputs that would already need to be collected for the original method. Moreover, we still use 128 calibration samples in total across all of our calibration languages.
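Assuming the VAR and activation-probability statistics are computed as for M-Wanda, the M-RIA score of Eq. 9 can be sketched as follows (a NumPy sketch with our own naming, not the authors' implementation):

```python
import numpy as np

def m_ria_scores(W, X, VAR, p_active, alpha=0.5, lam=0.2):
    """Sketch of the M-RIA pruning metric (Eq. 9).
    W: (out_features, in_features) weights; X: (num_tokens, in_features)
    pooled calibration activations; VAR and p_active: per-input-feature
    statistics computed as for M-Wanda."""
    absW = np.abs(W)
    # Eq. 8: relative importance over the input-column and output-row sums
    ri = absW / absW.sum(axis=0, keepdims=True) + absW / absW.sum(axis=1, keepdims=True)
    # Eq. 9: activation term with cross-lingual variance added
    a_x = np.linalg.norm(X, axis=0) ** alpha + lam * VAR
    return ri * (a_x * p_active)                 # broadcasts over output rows
```

Because the relative-importance term only re-weights |W|, the multilingual corrections slot in unchanged, which is what makes the method portable across pruning criteria.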
However, while we show that input activation statistics can help inform pruning decisions, M-Wanda, unlike Wanda, does require tuning of hyperparameters. To alleviate the need for manual tuning, future work could investigate how hyperparameters could be adjusted automatically based on the scale of the weights and activations. In addition, we limited the scope of this project to studying unstructured pruning, the setting for which Wanda was originally developed. However, Sun et al. (2023) show that Wanda can also be extended to structured N:M pruning, which requires at most N out of every M contiguous weights to be non-zero (e.g. 2:4 or 4:8) (Mishra et al., 2021). While this usually results in lower performance, it is more amenable to practical inference speed-ups. Thus, future work should investigate how the core ideas behind M-Wanda generalize to structured pruning settings.

Acknowledgement

This work is supported by the Dutch National Science Foundation (NWO Vici VI.C.212.053).

References

Abhinav Bandari, Lu Yin, Cheng-Yu Hsieh, Ajay Jaiswal, Tianlong Chen, Li Shen, Ranjay Krishna, and Shiwei Liu. 2024. Is C4 Dataset Optimal for Pruning? An Investigation of Calibration Data for LLM Pruning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 18089–18099.

Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, and John Guttag. 2020. What is
the state of neural network pruning? Proceedings of Machine Learning and Systems, 2:129–146.

Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating Cross-lingual Sentence Representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475–2485.

John Dang, Shivalika Singh, Daniel D'souza, Arash Ahmadian, Alejandro Salamanca, Madeline Smith, Aidan Peppin, Sungjin Hong, Manoj Govindassamy, Terrence Zhao, et al. 2024. Aya expanse: Combining research breakthroughs for a new multilingual frontier. arXiv preprint arXiv:2412.04261.

Jonathan Frankle and Michael Carbin. 2018. The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks. In International Conference on Learning Representations.

Elias Frantar and Dan Alistarh. 2023. SparseGPT: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning, pages 10323–10337. PMLR.

Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.

Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, and Hannaneh Hajishirzi. 2024. OLMo: Accelerating the Science of Language Models. Preprint.
Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in Neural Information Processing Systems, 28.

Tahmid Hasan, Abhik Bhattacharjee, Md Saiful Islam, Kazi Mubasshir, Yuan-Fang Li, Yong-Bin Kang, M Sohel Rahman, and Rifat Shahriyar. 2021. XL-Sum: Large-Scale Multilingual Abstractive Summarization for 44 Languages. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4693–4703.

Yifei He, Alon Benhaim, Barun Patra, Praneetha Vaddamanu, Sanchit Ahuja, Parul Chopra, Vishrav Chaudhary, Han Zhao, and Xia Song. 2024. Scaling Laws for Multilingual Language Models. arXiv preprint arXiv:2410.12883.

Weizhong Huang, Yuxin Zhang, Xiawu Zheng, Fei Chao, and Rongrong Ji. 2025. Determining Layer-wise Sparsity for Large Language Models Through a Theoretical Perspective. arXiv preprint arXiv:2502.14770.

Yixin Ji, Yang Xiang, Juntao Li, Qingrong Xia, Ping Li, Xinyu Duan, Zhefeng Wang, and Min Zhang. 2024. Beware of Calibration Data for Pruning Large Language Models. The Thirteenth International Conference on Learning Representations.

Takeshi Kojima, Itsuki Okimura, Yusuke Iwasawa, Hitomi Yanaka, and Yutaka Matsuo. 2024. On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 6919–6971. Association for Computational Linguistics.

Simon Kurz, Jian-Jia Chen, Lucie
Flek, and Zhixue Zhao. 2024. Investigating Language-Specific Calibration For Pruning Multilingual Large Language Models. arXiv preprint arXiv:2408.14398.

Yann LeCun, John Denker, and Sara Solla. 1989. Optimal brain damage. Advances in Neural Information Processing Systems, 2.

Lujun Li, Peijie Dong, Zhenheng Tang, Xiang Liu, Qiang Wang, Wenhan Luo, Wei Xue, Qifeng Liu, Xiaowen Chu, and Yike Guo. 2024. Discovering sparsity allocation for layer-wise pruning of large language models. Advances in Neural Information Processing Systems, 37:141292–141317.

Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning Language Representations for Typology Prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2529–2535.

Asit Mishra, Jorge Albericio Latorre, Jeff Pool, Darko Stosic, Dusan Stosic, Ganesh Venkatesh, Chong Yu, and Paulius Micikevicius. 2021. Accelerating sparse deep neural networks. arXiv preprint arXiv:2104.08378.

Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, et al. 2023. Crosslingual Generalization through Multitask Finetuning. In The 61st Annual Meeting of the Association for Computational Linguistics.

Kelechi Ogueji, Orevaoghene Ahia, Gbemileke Onilude, Sebastian Gehrmann, Sara Hooker, and Julia Kreutzer. 2022. Intriguing Properties of Compression on Multilingual Models. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 9092–9110.

Denis Paperno, Germán Kruszewski, Angeliki Lazaridou, Ngoc-Quan Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525–1534.
Edoardo Maria Ponti, Goran Glavaš, Olga Majewska, Qianchu Liu, Ivan Vulić, and Anna Korhonen. 2020. XCOPA: A Multilingual Dataset for Causal Commonsense Reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362–2376.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1–67.

Mingjie Sun, Zhuang Liu, Anna Bair, and J Zico Kolter. 2023. A simple and effective pruning approach for large language models. The Twelfth International Conference on Learning Representations.

Tianyi Tang, Wenyang Luo, Haoyang Huang, Dongdong Zhang, Xiaolei Wang, Wayne Xin Zhao, Furu Wei, and Ji-Rong Wen. 2024. Language-Specific Neurons: The Key to Multilingual Capabilities in Large Language Models. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5701–5715.

Miles Williams and Nikolaos Aletras. 2024. On the Impact of Calibration Data in Post-training Quantization and Pruning. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 10100–10118.

Ge Yang, Changyi He, Jinyang Guo, Jianyu Wu, Yifu Ding, Aishan Liu, Haotong Qin, Pengliang Ji, and Xianglong Liu. 2024. LLMCBench: Benchmarking Large Language Model Compression for Efficient Deployment. In The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.

Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), page 3687. Association for Computational Linguistics.

Lu Yin, You Wu, Zhenyu Zhang, Cheng-Yu Hsieh, Yaqing Wang, Yiling Jia, Gen Li, Ajay Kumar Jaiswal, Mykola Pechenizkiy, Yi Liang, et al. 2024. Outlier Weighed Layerwise Sparsity (OWL): A Missing Secret Sauce for Pruning LLMs to High Sparsity. In International Conference on Machine Learning, pages 57101–57115. PMLR.

Hongchuan Zeng, Hongshen Xu, Lu Chen, and Kai Yu. 2024. Multilingual Brain Surgeon: Large Language Models Can Be Compressed Leaving No Language behind. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 11794–11812.

Yingtao Zhang, Haoli Bai, Haokun Lin, Jialin Zhao, Lu Hou, and Carlo Vittorio Cannistraci. 2024. Plug-and-play: An efficient post-training pruning method for large language models. In The Twelfth International Conference on Learning Representations.

Yiran Zhao, Wenxuan Zhang, Guizhen Chen, Kenji Kawaguchi, and Lidong Bing. 2024. How do Large Language Models Handle Multilingualism? In The Thirty-eighth Annual Conference on Neural Information Processing Systems.

Xunyu Zhu, Jian Li, Yong Liu, Can Ma, and Weiping Wang. 2024. A survey on model compression for large language models. Transactions of the Association for Computational Linguistics, 12:1556–1577.
A Hyperparameter selection

 | λ | ϵ | γ | CWL block
Llama3-1B | 0.2 | 5e-5 | 0.04 | attn
Llama3-3B | 0.02 | 1e-7 | 0.04 | attn
Llama3-8B | 0.2 | 5e-5 | 0.04 | attn
Aya-23-8B | 0.2 | 1e-7 | 0.04 | MLP
OLMo-7B | 0.2 | 1e-7 | 0.04 | MLP
Bloomz-7b1 | 6 | 0 | 0.01 | attn

Table 5: Optimal hyperparameters found for M-Wanda after a small search over λ ∈ [0.02, 0.2], ϵ ∈ [5e-5, 1e-7], γ ∈ [0.01, 0.04] and CWL block ∈ [attn, MLP].

In Table 5, we report the optimal hyperparameters used when applying M-Wanda to each model. Note that these are the hyperparameters used both for the results on the Flores dataset, reported in Table 1, and for the XL-Sum dataset, reported in Table 6. We find that, generally, λ = 0.2 and γ = 0.04 work well on most LLMs. However, interestingly, the optimal value for λ on Bloomz-7b1 is on a completely different scale, so we ran a different grid search for this model: λ ∈ [6, 12]. Moreover, we noticed that input activations from Bloomz-7b1 are sometimes equal to zero, resulting in suboptimal performance when applying the activation probabilities. Thus, while we found that this model benefits from cross-lingual variance and CWL, activation probability should be disabled (ϵ = 0) for the improvements reported in the main paper. In addition, the Wanda+OWL results reported in Table 3 required tuning of the hyperparameter M. We found that M = 3 is optimal for Llama 1B and 3B, while M = 5 yields better results for Llama 8B. Similarly, while γ = 0.08 was reported to generally work best with OWL, we found that γ = 0.04, as used for CWL, performed better. As such, those are the values used for the results reported
https://arxiv.org/abs/2505.21171v1
B XL-Sum results

Model        Wanda   M-Wanda
Llama3.2-1B  40.30   36.88 (8%↓)
Llama3.2-3B  16.40   15.61 (5%↓)
Llama3.1-8B  12.18   11.57 (5%↓)
Aya-23-8B    15.46   15.14 (2%↓)
Bloomz-7b1   20.35   17.40 (14%↓)
OLMo-7B      15.28   14.34 (6%↓)

Table 6: Average perplexity scores on the XL-Sum validation set across 13/15 languages at a sparsity ratio of 50%. We use 500 articles for each language. Note: German and Italian are not covered by this dataset.

C Effectiveness of M-Wanda at different sparsity ratios

Sparsity   30%    35%    40%    45%    50%    55%    60%    65%     70%
Wanda      12.12  12.62  13.52  15.31  19.63  32.27  73.02  174.70  1743.14
M-Wanda    12.12  12.60  13.43  15.02  18.57  28.46  58.96  159.48  835.63

Table 7: Average perplexity scores of Wanda and M-Wanda across different sparsity levels. For reference, the average performance of the full model is 11.38. Results are reported on Llama-8B.

D English-centric pruning

Figure 8: The effect of higher sparsity ratios on the perplexity across languages (panels: English-1B, English-3B, English-8B; x-axis: pruning ratio 0.0–0.5; y-axis: perplexity; languages: en, es, fr, de, hi, it, pt, ru, ar, tr, vi, zh, ko, ja, id). The calibration data is fully in English and perplexity is measured on the Flores dataset. Results are reported on the Llama3 models.

E Sparsity allocation with OWL

Figure 9: Layerwise sparsity allocation by OWL (x-axis: layer index 0–30; y-axis: sparsity ratio 0.46–0.54; models: Llama-8B, Aya-8B).

F Downstream task results per language
Lang.  Wanda  M-Wanda

XCOPA
  id  58.8  60.4
  it  62.8  63.2
  tr  57.8  58.6
  vi  59.6  61.0
  zh  62.8  62.6
XStory
  ar  52.9  53.8
  en  72.0  71.8
  es  65.0  65.2
  hi  56.8  57.8
  id  58.7  60.3
  ru  61.7  61.9
  zh  58.9  59.6
XWinograd
  en  86.7  86.0
  fr  69.9  69.9
  pt  73.8  74.1
  ru  66.7  68.9
  zh  72.0  73.8
Lambada
  de  34.4  36.3
  en  69.2  71.9
  es  19.9  22.8
  fr  43.3  45.8
  it  43.6  46.6
XNLI
  ar  32.8  33.2
  de  50.5  51.9
  en  55.4  55.2
  es  50.3  52.3
  fr  51.1  52.3
  hi  43.9  46.5
  ru  43.8  47.3
  tr  44.8  45.6
  vi  45.1  45.3
  zh  33.0  35.5
PAWS-X
  de  63.3  64.7
  en  65.6  65.7
  es  58.4  61.1
  fr  58.5  58.6
  ja  56.1  53.2
  ko  49.3  51.2
  zh  50.8  53.1

Table 8: Per-language accuracy (%) for each downstream task using Wanda and M-Wanda on Llama-8B.
arXiv:2505.21180v1 [cs.LG] 27 May 2025

Latent label distribution grid representation for modeling uncertainty

ShuNing Sun (University of the Chinese Academy of Sciences), YinSong Xiong (Nanjing University of Science and Technology), Yu Zhang (University of the Chinese Academy of Sciences), Zhuoran Zheng* (Sun Yat-sen University)

Abstract

Although Label Distribution Learning (LDL) has promising representation capabilities for characterizing the polysemy of an instance, the complexity and high cost of label distribution annotation lead to inexactness in the construction of the label space. The existence of a large number of inexact labels generates a label space with uncertainty, which misleads the LDL algorithm into incorrect decisions. To alleviate this problem, we model the uncertainty of label distributions by constructing a Latent Label Distribution Grid (LLDG) to form a low-noise representation space. Specifically, we first construct a label correlation matrix based on the differences between labels, and then expand each value of the matrix into a vector that obeys a Gaussian distribution, thus building an LLDG to model the uncertainty of the label space. Finally, the LLDG is reconstructed by the LLDG-Mixer to generate an accurate label distribution. Note that we enforce a customized low-rank scheme on this grid, which assumes that the label relations may be noisy and require noise reduction with the help of a Tucker reconstruction technique. Furthermore, we evaluate the effectiveness of the LLDG by treating its generation as an upstream task for object classification. Extensive experimental results show that our approach performs competitively on several benchmarks, and the executable code and datasets are released in the Appendix.

1 Introduction

Label distribution learning (LDL) is a novel machine learning paradigm that encodes an example through a descriptive degree distribution to convey rich semantics.
Given that a label distribution is a probabilistic vector representation with natural robustness in characterizing uncertainty, it has also been widely used in areas such as label noise [12, 17], object recognition [10, 19, 22], and semantic understanding [23]. Although LDL can better represent uncertainty, few studies [11, 25] have focused on the uncertainty problem within the LDL paradigm itself. In LDL, annotating the label distributions of training data is complex and has a high labeling cost, which easily introduces inaccuracy and noise into the label distribution data, thus reducing the accuracy of the learning algorithm.

* Corresponding author, zhengzr@njust.edu.cn

Preprint. Under review.

Figure 1: LLDG vs. label distribution matrix (LDM). This figure shows the representation patterns of LDM and LLDG: LDM considers only the distribution of the values of the label distribution, while LLDG considers the relational distribution of those values.

To alleviate the above problem, we can start from two directions: first, make full use of the correlation between labels, which helps correct inaccurate labeling of a single label; second, further process the representation of the label distribution to make it more noise-resistant. For the first characteristic, a lot of work [4, 7, 8, 14, 15] has verified that label correlation can boost the performance
of LDL algorithms. For the second characteristic, Zheng and Jia [25] have made a preliminary exploration by extending the label distribution vector to a label distribution matrix, where the value of each component of the label distribution is represented by a new vector satisfying a Gaussian distribution to express its uncertainty. This construction of a two-dimensional distribution representation greatly improves the uncertainty representation capability of the LDL paradigm. Inspired by their work, we integrate the two characteristics of label correlation and extended label distribution representation, and design a novel latent label distribution grid (LLDG) with stronger representational ability. Different from existing work, we make no prior assumptions about label correlation and directly use the differences between labels to construct a label correlation matrix, preserving the raw information between labels to the greatest extent. In terms of label distribution expansion, instead of expanding the values in the label distribution vector directly, we expand the label correlation matrix constructed from the label differences, expanding each difference in the matrix into a Gaussian distribution and thus constructing a grid representation. As we can see in Figure 1, moving from a one-dimensional label distribution vector to a two-dimensional label distribution matrix, and then to a three-dimensional label distribution grid, the model's ability to characterize uncertainty grows as the dimension increases, while the model complexity gradually increases. Essentially, we extend the data only once in the form of a distribution, but this extension incorporates the label correlation information, and the constructed grid representation has a stronger uncertainty representation capability without increasing the model complexity. In this paper, we propose a novel LDL method by incorporating the LLDG into a deep learning network.
Specifically, we first build a feature extractor to serve the latent space (LLDG). This feature extractor involves a convolution operator and a self-attention mechanism with an MLP. Here, the convolution operator is a 1D convolution, employed to extract cross-information between neighboring features in the tabular data. The self-attention mechanism with MLP performs global modeling on the tabular features. Next, the feature map carrying local and global information is reshaped to generate an LLDG. The LLDG is endowed with a label relation in the label space to constrain the representation space and achieve stable learning. In addition, to improve the stability of this grid representation space, a low-rank characteristic is enforced on the LLDG by leveraging the Tucker reconstruction algorithm. Finally, the LLDG is fed into a network with an MLP, via deformation, squeezing, etc., to yield an accurate label distribution. The above procedure, named latent label distribution grid representation, has high-quality feature extraction and representation ability. To further demonstrate the LLDG's modeling capability, we leverage the LLDG to model the uncertainty of labeling relationships for a medical image classification task, followed by a simple classifier to discriminate between images. The main contributions of this paper are summarized as follows:

Figure 2: Our architecture. In general, our model has two learning targets: on the one hand, to learn the label distribution,
and on the other hand, to learn the LLDG. Specifically, the input features are first extracted by a local-global feature extractor to create a grid. The LLDG establishes a label correlation space that is constrained by the Tucker reconstruction algorithm. Then, this grid is processed by the LLDG-Mixer and squeezed to form a vector. Finally, this vector is normalized by Softmax to form a label distribution.

• We propose the latent label distribution grid representation with a strong representation capability, which introduces label relationships to construct a stable learning space for regressing an accurate label distribution.
• To further enhance the stability of the learning space, we introduce a Tucker reconstruction algorithm that is enforced on the LLDG. In addition, we develop a learning network for the LLDG with local and global modeling ability.
• Extensive experimental results demonstrate the optimal results of our method in terms of learning ability, noise tolerance, and learning cost.

2 Related Works

This section introduces related work to situate our contribution, divided into two parts.

Label distribution learning. Currently, LDL plays a vital role in estimating a task's uncertainty and thus boosting model generalization. The LDL paradigm originates from an age estimation task [5]. Since then, a large number of approaches have been proposed, such as low-rank hypothesis-based [9, 15], metric-based [4], manifold-based [18, 21], and label-correlation-based [6, 13, 20] methods. Moreover, some approaches are applied in computer vision [2, 4, 11, 24] and speech recognition [16] tasks to boost classifier performance. Recently, several LDL approaches have started to tackle the label noise problem [12, 25]. Unfortunately, these methods ignore the uncertainty of the label space caused by environmental noise and manual annotation.
Recently, several studies [25, 6] have attempted to model the uncertainty of the label space. They sample the components of the label distribution to extend the representation space. Building on these, we integrate label extension and latent representation approaches to tackle LDL tasks.

Label relationship modeling. Label relations are also widely considered in LDL tasks [4, 7, 8, 14, 15]. Such relations are usually built by adding a ranking loss to the objective function. For example, Shen and Jia [8] assume an ordinal relationship between the values of the label distributions and, for this purpose, introduce an ordinal loss as an optimization objective. However, such explicit expression usually has weaker learning capability than latent representation learning, due to the broader modeling space of representation learning [1]. In this paper, we attempt to build label relations with constraints in the representation space. Overall, unlike previous work in label distribution learning, we are the first to model label relations as a representation space to learn an accurate label distribution. Furthermore, our approach can be extended to any classification task to enhance the robustness of the learner.

3 Method

As shown in the framework of Figure 2, this section explains the details of the LLDG and the generated LDL. Latent label
distribution grid representation involves notation definition, pipeline description, and loss function.

Notation. Let X_{m×n} denote the feature matrix of the input (the customized LDL datasets are all tabular data), where m denotes the number of samples and n the feature space dimension. Let L_c denote the c-dimensional label. In LDL, to better describe the correspondence between samples and labels, L is denoted as L_i = {d_{x_i}^{l_1}, ..., d_{x_i}^{l_c}}, where d_{x_i}^{l_j} (abbreviated as d_i^j) indicates the degree to which the j-th label describes the i-th sample. Two constraints, d_i^j ∈ [0, 1] and Σ_j d_i^j = 1, make L_i a distribution. In addition, the LLDG is denoted as B and the difference matrix as D. The goal of LDL is to learn the features of the sample and calculate the degree to which each label describes the sample.

Local-global feature extractor. To build a high-quality representation space, we deliberately design a pipeline to generate latent representations (LLDG). Specifically, to build a qualified LLDG, we propose CFormer as a feature extractor. First, each sample x_i can be viewed as a 1D signal or a sequence of tokens. Then, we use several 1D convolutional kernels (Conv 1×1) to extract the features of x_i. Next, the long-range dependencies of x_i are modeled using a standard Transformer [3]. Finally, the feature map computed by the Transformer is deformed and squeezed (DS) to yield a c×c×c grid B. The whole workflow can be formulated as follows:

B = Tanh(DS(Transformer(Conv_{1×1}(X)))),  B ∈ (−1, 1).   (1)

Note that the kernels of these 1D convolutional layers use sizes {3, 5, 7} to capture the local features of the tabular data. The feature extractor uses PReLU (PReLU ∈ (−∞, +∞)) as its activation function to provide nonlinearity. In addition, B is bounded by the Tanh function to avoid outliers due to stochasticity.
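The shape flow of Eq. (1) can be illustrated with a minimal numpy sketch. This is an assumption-laden stand-in, not the paper's implementation: random matrices replace learned weights, a single attention head replaces the full Transformer block, and `cformer_sketch` is our name for the toy function.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cformer_sketch(x, c, seed=0):
    """Shape-level sketch of Eq. (1): B = Tanh(DS(Transformer(Conv1x1(x)))).

    x: (n,) tabular feature vector; returns a (c, c, c) grid in (-1, 1).
    Random weights stand in for learned ones (hypothetical, for illustration).
    """
    rng = np.random.default_rng(seed)
    d_model = 16
    # "Conv 1x1": position-wise lift of each scalar feature to d_model channels.
    tokens = x[:, None] * rng.normal(size=(1, d_model))          # (n, d_model)
    # Single-head self-attention as a stand-in for the Transformer (global modeling).
    Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_model)) @ v               # (n, d_model)
    # DS: deform and squeeze the feature map into a c*c*c grid, bounded by Tanh.
    flat = attn.flatten()
    W_ds = rng.normal(size=(flat.size, c ** 3)) / np.sqrt(flat.size)
    return np.tanh(flat @ W_ds).reshape(c, c, c)

B = cformer_sketch(np.linspace(0, 1, 24), c=4)
```

The sketch only demonstrates how an n-dimensional tabular sample becomes a bounded c×c×c grid; the real CFormer additionally uses kernel sizes {3, 5, 7} and PReLU activations.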
Next, we describe the characteristics and optimization schemes of this grid B.

Latent label distribution grid. The LLDG B is a latent representation with high capacity and elemental correlation. To efficiently leverage it, we impose two regularization terms on it: one is the label relation, the other the low-rank characteristic. Specifically, we first construct a c×c difference matrix D (skew-symmetric) in the label space, i.e., the labels compute the differences a_ij between each other:

D = [ a_11  a_12  ...  a_1c
      a_21  a_22  ...  a_2c
       ...   ...  ...   ...
      a_c1  a_c2  ...  a_cc ],   a_ij = d_j − d_i.

(We also tried dividing label values by each other, but the resulting grid was not smooth: the overall learning space was a multi-peaked saddle.) Then, we create a grid ˆB from D (2D → 3D). We use a_ij as the mean and 1 − |a_ij| as the variance (the clearer the relationship between label values, the smaller the uncertainty) for Gaussian sampling N to obtain a distribution of length c. Each element of the matrix D undergoes this procedure, generating a c×c×c grid ˆB. Finally, this ˆB serves as a priori knowledge to bound B during training. The whole workflow can be written as:

min g(B, ˆB),  ˆB ← N(D),   (2)

where g denotes the distance minimized between the two components; here we use the L1 norm.
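The construction of D and the prior grid ˆB can be sketched directly from the definitions above (a minimal numpy sketch; the function name and seed handling are ours):

```python
import numpy as np

def build_lldg_prior(d, seed=None):
    """Build the difference matrix D and prior grid hat-B from a label
    distribution d of length c.

    D[i, j] = d_j - d_i (skew-symmetric); each entry a_ij is expanded into a
    length-c Gaussian sample with mean a_ij and variance 1 - |a_ij|.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(d, dtype=float)
    c = d.shape[0]
    D = d[None, :] - d[:, None]                 # (c, c), D[i, j] = d_j - d_i
    std = np.sqrt(1.0 - np.abs(D))              # std = sqrt(variance), variance = 1 - |a_ij|
    # One length-c Gaussian sample per matrix entry -> (c, c, c) grid hat-B.
    B_hat = rng.normal(loc=D[..., None], scale=std[..., None], size=(c, c, c))
    return D, B_hat

D, B_hat = build_lldg_prior([0.5, 0.3, 0.2], seed=0)
```

Since d_j ∈ [0, 1], every |a_ij| ≤ 1, so the variance 1 − |a_ij| is always non-negative; the clearer the gap between two labels, the tighter the Gaussian around it.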
Since Gaussian functions are inherently highly uncertain, an LLDG regularization algorithm with low-rank characteristics is introduced to alleviate the noise caused by sampling. Unlike algorithms such as SVD and PCA, since B is a tensor, we develop a Tucker reconstruction algorithm to enforce a low-rank characteristic on B. Specifically, the Tucker reconstruction algorithm is a two-stage approach, i.e., Tucker decomposition and recovery. First, given any third-order tensor Y ∈ R^{M×N×T}, if the rank of the Tucker decomposition is assumed to be (R_1, R_2, R_3), then for any element y_ijt, the Tucker decomposition reads

y_ijt ≈ Σ_{r1=1}^{R1} Σ_{r2=1}^{R2} Σ_{r3=1}^{R3} g_{r1 r2 r3} u_{i r1} v_{j r2} x_{t r3},  ∀(i, j, t),   (3)

where the core tensor G has size R_1 × R_2 × R_3 with elements g_{r1 r2 r3}, ∀(r1, r2, r3); the first factor matrix U has size M × R_1 with elements u_{i r1}, ∀(i, r1); the second factor matrix V has size N × R_2 with elements v_{j r2}, ∀(j, r2); and the third factor matrix X has size T × R_3 with elements x_{t r3}, ∀(t, r3). The Tucker decomposition can also be written as

Y ≈ G ×_1 U ×_2 V ×_3 X,   (4)

where the symbol ×_k, k = 1, 2, 3, denotes the product between a tensor and a matrix, also called the modal (mode-k) product. The Tucker recovery process can then be viewed as the inverse of the above, using the obtained core tensor and factor matrices. This regularization algorithm with low-rank characteristics can be written as:

B* = Tucker[R](Tucker[D](B)),   (5)

where Tucker[D] and Tucker[R] represent the processes of decomposition and recovery, respectively, and B* denotes the tensor with the low-rank characteristic enforced by the Tucker reconstruction.

Generate label distribution. Since there is an indirect correlation between the labels, we attempt to perform global modeling on B* to regress an accurate label distribution. For this purpose, we develop an LLDG-Mixer that recreates a label distribution from B.
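Before detailing the Mixer, the Tucker step of Eqs. (3)–(5) can be sketched in numpy via truncated HOSVD, one standard way to compute a Tucker decomposition (the paper does not specify its exact solver, so this is an illustrative assumption):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a 3rd-order tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold: matrix back to a tensor of the given shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def mode_product(T, M, mode):
    """Mode-k product T x_k M (Eq. 4's modal product)."""
    shape = list(T.shape)
    shape[mode] = M.shape[0]
    return fold(M @ unfold(T, mode), mode, shape)

def tucker_reconstruct(B, ranks):
    """Eq. (5): Tucker[R](Tucker[D](B)) via truncated HOSVD.

    Decomposition: factor matrices from the leading left singular vectors of
    each unfolding; core G = B x_1 U1^T x_2 U2^T x_3 U3^T.
    Recovery: B* = G x_1 U1 x_2 U2 x_3 U3, a low-rank version of B.
    """
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(B, mode), full_matrices=False)
        factors.append(U[:, :r])
    G = B
    for mode, U in enumerate(factors):
        G = mode_product(G, U.T, mode)      # Tucker[D]: core tensor
    B_star = G
    for mode, U in enumerate(factors):
        B_star = mode_product(B_star, U, mode)  # Tucker[R]: recovery
    return B_star
```

With full ranks (R_1, R_2, R_3) = (c, c, c) the reconstruction is exact; truncating the ranks discards the smallest singular directions of each unfolding, which is the noise-reduction effect the regularizer relies on.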
During the reconstruction process, the LLDG-Mixer first uses three linear layers (each with c neurons) to learn the x-axis, y-axis, and z-axis information of the grid B, respectively; the dimensions of B are swapped between these three linear layers. Next, the grid B is squeezed along the z-axis using an adding operator (ao_z) to yield a map. Then, this map is squeezed along the x-axis via the adding operator (ao_x) to generate a vector. Finally, this vector is normalized by the Softmax function to generate the label distribution. The LLDG-Mixer models long-range dependencies in a "rolling" fashion across the three dimensions of the grid, formalized as follows:

L = softmax(ao_x(ao_z(MLP(B)))),   (6)

where MLP represents the three linear layers; the LLDG-Mixer uses ReLU as the activation function.

Loss function. The loss function (Total_loss) consists of two parts: the grid loss Loss_g and the label distribution loss Loss_d. We denote the predicted label distribution as ˆL and the real label distribution as L:

Loss_d = (1/c) Σ_{i=1}^{c} ||ˆL_i − L_i||_2,   (7)

Loss_g = (1/c^3) Σ_{i,j,k=1}^{c} ||B_{ijk} − ˆB_{ijk}||_2,   (8)

Total_loss = Loss_d + λ × Loss_g,   (9)

where λ denotes the weight; here we use 0.5. In addition, we also tried KL divergence and other regularization schemes, and they did not yield significant improvements.
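Eqs. (7)–(9) can be sketched as follows. Note one assumption: we read the ||·||_2 applied to scalar entries as a per-entry squared error, which is one common reading of this notation; the paper does not spell it out.

```python
import numpy as np

def total_loss(L_pred, L_true, B, B_hat, lam=0.5):
    """Sketch of Eqs. (7)-(9): label-distribution loss plus weighted grid loss.

    L_pred, L_true: length-c label distributions; B, B_hat: (c, c, c) grids.
    Assumes ||.||_2 on scalar entries means squared error (our reading).
    """
    L_pred, L_true = np.asarray(L_pred, float), np.asarray(L_true, float)
    c = L_true.shape[0]
    loss_d = np.sum((L_pred - L_true) ** 2) / c                    # Eq. (7)
    loss_g = np.sum((np.asarray(B) - np.asarray(B_hat)) ** 2) / c ** 3  # Eq. (8)
    return loss_d + lam * loss_g                                   # Eq. (9), lambda = 0.5
```

Both terms vanish when prediction and target coincide, and λ = 0.5 down-weights the grid term, consistent with the reported 82.3% / 17.7% contribution split of Loss_d and Loss_g.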
Referring to the design of the variational autoencoder, we also tried imposing a standard normal distribution on B to remove the instability caused by the variance, but its effectiveness was not satisfactory.

4 Experiment

The purpose of this section is to evaluate the effectiveness of the LLDG. The experiments are organized into two main parts: one evaluates on customized LDL datasets, and the other uses the LLDG as a representation space for a classification task on a publicly available benchmark. In addition, we visualize the change of the LLDG as the label-space noise increases, demonstrating the robustness of our method.

Datasets. We conduct experiments on 15 LDL datasets to evaluate the model's accuracy. These datasets cover the biological, image (human face), and text (movie) domains. The SJAFFE dataset uses 243-dimensional features (extracted with a pre-trained ResNet18/50) to represent the facial characteristics of Japanese female subjects. For each sample, 60 experts score six emotions (happiness, sadness, surprise, fear, anger, and disgust), and the normalized average score is used as the label distribution of the sample. The Movie dataset contains 7755 movie ratings from Netflix; it uses a 1,869-dimensional feature vector (extracted with a pre-trained BERT) to represent a movie and a 5-dimensional label distribution as the learning target, where the label distribution comes from the audience's evaluation of the movie. The Human Gene dataset has the most examples among the customized LDL datasets; each example contains 36 features and the corresponding 68 diseases. wc-LDL is a dataset of watercolor paintings with characteristics reported in [25]; SBU-3DFE is a face dataset constructed by the same process as SJAFFE. The remaining 12 datasets can be found at^2. The datasets are summarized in Table 1.
Index  Dataset      Instances  Features  Labels
1      Yeast-alpha  2465       24        18
2      Yeast-cdc    2465       24        15
3      Yeast-cold   2465       24        4
4      Yeast-diau   2465       24        14
5      Yeast-dtt    2465       24        4
6      Yeast-elu    2465       24        14
7      Yeast-heat   2465       24        6
8      Yeast-spo    2465       24        6
9      Yeast-spoem  2465       24        2
10     Yeast-spo5   2465       24        3
11     SJAFFE       213        243       6
12     Human Gene   30542      36        68
13     Movie        7755       1869      5
14     wc-LDL       500        243       12
15     SBU-3DFE     2500       243       6

Table 1: The 15 datasets, with detailed statistics of instances, features, and labels.

Name            Formula                                                      Value field
Chebyshev ↓     max_j |d_j − ˆd_j|                                           (0, 1)
Clark ↓         sqrt( Σ_{j=1}^{L} (d_j − ˆd_j)^2 / (d_j + ˆd_j)^2 )          (0, +∞)
K-L ↓           Σ_{j=1}^{L} d_j ln(d_j / ˆd_j)                               (0, +∞)
Canberra ↓      Σ_{j=1}^{L} |d_j − ˆd_j| / (d_j + ˆd_j)                      (0, +∞)
Cosine ↑        Σ_{j=1}^{L} d_j ˆd_j / ( sqrt(Σ_j d_j^2) sqrt(Σ_j ˆd_j^2) )  (0, 1)
Intersection ↑  Σ_{j=1}^{L} min(d_j, ˆd_j)                                   (0, 1)

Table 2: Evaluation metrics for LDL algorithms, where ↑ and ↓ indicate the performance direction.

Evaluation measures. To objectively evaluate the performance of the algorithms, we use the six metrics proposed by Geng [5]: Chebyshev distance (Chebyshev ↓), K-L divergence (K-L ↓), Canberra distance (Canberra ↓), Clark distance (Clark ↓), Intersection similarity (Intersection ↑), and Cosine similarity (Cosine ↑). Their calculation formulas are shown in Table 2, where ↓ means the smaller the better, and ↑ means the bigger the better.
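The six measures of Table 2 translate directly into numpy (a sketch; the small `eps` guard against division by zero and log of zero is our addition):

```python
import numpy as np

def ldl_metrics(d, d_hat, eps=1e-12):
    """The six LDL evaluation measures of Table 2 for a pair of label
    distributions d (ground truth) and d_hat (prediction)."""
    d, d_hat = np.asarray(d, float), np.asarray(d_hat, float)
    return {
        "chebyshev":    np.max(np.abs(d - d_hat)),                                  # lower is better
        "clark":        np.sqrt(np.sum((d - d_hat) ** 2 / (d + d_hat + eps) ** 2)), # lower
        "kl":           np.sum(d * np.log((d + eps) / (d_hat + eps))),              # lower
        "canberra":     np.sum(np.abs(d - d_hat) / (d + d_hat + eps)),              # lower
        "cosine":       np.sum(d * d_hat)
                        / (np.linalg.norm(d) * np.linalg.norm(d_hat) + eps),        # higher is better
        "intersection": np.sum(np.minimum(d, d_hat)),                               # higher
    }
```

For identical distributions, the distance measures are 0 and the similarity measures are 1, matching the value fields listed in Table 2.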
In addition, we estimate the value domain of each metric using a pair of label distributions^1 to establish valid bounds.

Experiment settings. Seven LDL algorithms are compared with our method: AA-KNN, AA-BP, IIS-LLD, BFGS-LLD, Duo-LDL, ENC-BFGS, and ENC-SDPT3. AA-BP and AA-KNN are two transformation algorithms: a three-layer MLP and the KNN algorithm, modified to fit the LDL task. IIS-LLD and BFGS-LLD share the same loss function but use different optimization algorithms; the same holds for ENC-BFGS and ENC-SDPT3. Duo-LDL is an improvement of AA-BP: although it is also a three-layer MLP, the output dimension of the last layer changes from c to c × (c − 1), and the learning target changes from labels to label differences. Except for wc-LDL, the evaluation results of all algorithms are taken from the existing papers (all datasets are divided into training and test sets at a ratio of 8:2). For our approach, we use the PyTorch 1.7 platform and an RTX 3090 GPU; the AdamW optimizer is deployed on all datasets to learn the model weights with a learning rate of 0.001. To evaluate the soundness of the optimization objective, we use wandb^3 to estimate the importance of each loss function term. The results show that the average contributions of Loss_d and Loss_g are 82.3% and 17.7%, respectively, across the 15 datasets.

Results. As shown in the experimental results of Tables 3 and 4, the best results are marked in bold. Since ENC-SDPT3 and ENC-BFGS do not publish their code, there are no Clark metrics for these two methods. It is worth comparing LLDG, AA-BP, and Duo-LDL, because all three are neural-network-based algorithms; as can be seen from the tables, LLDG outperforms both on most datasets. Furthermore, Table 4 shows that ENC-SDPT3 and ENC-BFGS achieve performance comparable to our method on Movie and SJAFFE.
This may be due to interfering features in these two datasets with large feature dimensions, making these two feature-reconstruction-based methods also able to play their roles.

^1 Predicted label distribution and ground truth; each label distribution strictly follows the definition of a label distribution (the sum of its values is 1 and each value belongs to [0, 1]).
^2 http://palm.seu.edu.cn/xgeng/LDL/index.htm
^3 https://github.com/wandb/wandb

Dataset / Algorithm: Chebyshev↓  Clark↓  Canberra↓  K-L↓  Cosine↑  Intersection↑

Yeast-alpha
  AA-KNN     0.0147±0.0008  0.2320±0.0120  0.7582±0.0289  0.0064±0.0004  0.9938±0.0003  0.9581±0.0021
  AA-BP      0.0401±0.0022  0.7110±0.0540  2.3521±0.1282  0.0771±0.0084  0.9391±0.0059  0.8772±0.0081
  IIS-LLD    0.0156±0.0004  0.2330±0.0120  0.7625±0.0351  0.0075±0.0003  0.9927±0.0003  0.9578±0.0021
  Duo-LDL    0.0160±0.0010  0.2100±0.0120  0.6813±0.0186  0.0056±0.0004  0.9946±0.0003  0.9624±0.0009
  BFGS-LLD   0.0135±0.0003  0.2100±0.0140  0.6845±0.0170  0.0063±0.0003  0.9943±0.0003  0.9621±0.0009
  ENC-BFGS   0.0134±0.0004  -              0.6812±0.0174  0.0056±0.0004  0.9946±0.0003  0.9624±0.0009
  ENC-SDPT3  0.0134±0.0004  -              0.6813±0.0171  0.0056±0.0004  0.9946±0.0003  0.9624±0.0009
  Ours       0.0134±0.0004  0.2082±0.0054  0.6767±0.1680  0.0054±0.0003  0.9947±0.0003  0.9626±0.0009
Yeast-cdc
  AA-KNN     0.0173±0.0004  0.2370±0.0140  0.7172±0.0215  0.0083±0.0005  0.9924±0.0004  0.9528±0.0028
  AA-BP      0.0410±0.0022  0.5680±0.0370  1.7152±0.1055  0.0608±0.0065  0.9508±0.0047  0.8912±0.0051
  IIS-LLD    0.0184±0.0005  0.2350±0.0120  0.7086±0.0215  0.0092±0.0005  0.9915±0.0004  0.9532±0.0021
  Duo-LDL    0.0165±0.0007  0.2160±0.0013  0.6462±0.0180  0.0074±0.0005  0.9933±0.0003  0.9575±0.0010
  BFGS-LLD   0.0164±0.0008  0.2160±0.0130  0.6487±0.0161  0.0074±0.0005  0.9928±0.0004  0.9574±0.0010
  ENC-BFGS   0.0162±0.0005  -              0.6461±0.0173  0.0074±0.0005  0.9933±0.0003  0.9575±0.0010
  ENC-SDPT3  0.0162±0.0005  -              0.6466±0.0157  0.0074±0.0005  0.9933±0.0003  0.9575±0.0010
  Ours       0.1587±0.0004  0.2130±0.0044  0.6445±0.0089  0.0068±0.0002  0.9934±0.0003  0.9576±0.0006
Yeast-cold
  AA-KNN     0.0542±0.0017  0.1500±0.0070  0.2604±0.0102  0.0142±0.0018  0.9872±0.0008  0.9362±0.0024
  AA-BP      0.0598±0.0031  0.1550±0.0090  0.2681±0.0143  0.0174±0.0026  0.9844±0.0016  0.9340±0.0032
  IIS-LLD    0.0545±0.0017  0.1440±0.0050  0.2487±0.0091  0.0144±0.0020  0.9871±0.0009  0.9376±0.0022
  Duo-LDL    0.0512±0.0015  0.1410±0.0050  0.2408±0.0092  0.0129±0.0020  0.9886±0.0008  0.9409±0.0019
  BFGS-LLD   0.0513±0.0016  0.1390±0.0050  0.2402±0.0090  0.0130±0.0014  0.9880±0.0007  0.9408±0.0019
  ENC-BFGS   0.0510±0.0018  -              0.2398±0.0089  0.0129±0.0022  0.9886±0.0008  0.9409±0.0021
  ENC-SDPT3  0.0510±0.0017  -              0.2398±0.0080  0.0129±0.0018  0.9886±0.0007  0.9409±0.0019
  Ours       0.0508±0.0021  0.1385±0.0054  0.2386±0.0090  0.0121±0.0010  0.9886±0.0008  0.9411±0.0021
Yeast-diau
  AA-KNN     0.0385±0.0012  0.2120±0.0040  0.4551±0.0112  0.0151±0.0012  0.9866±0.0008  0.9371±0.0021
  AA-BP      0.0531±0.0053  0.2630±0.0170  0.5675±0.0310  0.0299±0.0069  0.9742±0.0053  0.9224±0.0039
  IIS-LLD    0.0397±0.0011  0.2090±0.0070  0.4487±0.0170  0.0159±0.0011  0.9861±0.0007  0.9381±0.0020
  Duo-LDL    0.0370±0.0012  0.2020±0.0060  0.4331±0.0100  0.0138±0.0010  0.9879±0.0007  0.9402±0.0017
  BFGS-LLD   0.0374±0.0011  0.2000±0.0090  0.4312±0.0103  0.0139±0.0012  0.9869±0.0007  0.9403±0.0017
  ENC-BFGS   0.0370±0.0012  -              0.4312±0.0140  0.0140±0.0011  0.9879±0.0007  0.9402±0.0017
  ENC-SDPT3  0.0370±0.0012  -              0.4311±0.0129  0.0140±0.0011  0.9879±0.0007  0.9403±0.0017
  Ours       0.0368±0.0010  0.2000±0.0045  0.4307±0.0109  0.0130±0.0006  0.9880±0.0005  0.9402±0.0015
Yeast-dtt
  AA-KNN     0.0385±0.0013  0.1060±0.0040  0.1821±0.0071  0.0076±0.0016  0.9933±0.0005  0.9549±0.0021
  AA-BP      0.0470±0.0042  0.1180±0.0070  0.2043±0.0118  0.0114±0.0027  0.9898±0.0021  0.9502±0.0021
  IIS-LLD    0.0406±0.0014  0.1050±0.0040  0.1812±0.0051  0.0084±0.0016  0.9926±0.0005  0.9552±0.0016
  Duo-LDL    0.0361±0.0012  0.0980±0.0039  0.1689±0.0060  0.0068±0.0013  0.9941±0.0004  0.9582±0.0014
  BFGS-LLD   0.0360±0.0013  0.0980±0.0040  0.1690±0.0062  0.0071±0.0015  0.9940±0.0005  0.9583±0.0015
  ENC-BFGS   0.0359±0.0012  -              0.1687±0.0065  0.0069±0.0015  0.9941±0.0004  0.9584±0.0014
  ENC-SDPT3  0.0359±0.0012  -              0.1686±0.0062  0.0069±0.0015  0.9941±0.0005  0.9584±0.0014
  Ours       0.0356±0.0014  0.0967±0.0043  0.1680±0.0077  0.0059±0.0006  0.9942±0.0005  0.9586±0.0019
Yeast-elu
  AA-KNN     0.0173±0.0004  0.2180±0.0050  0.6442±0.0143  0.0073±0.0004  0.9931±0.0002  0.9546±0.0011
  AA-BP      0.0409±0.0023  0.5050±0.0340  1.4885±0.0672  0.0540±0.0058  0.9557±0.0042  0.8992±0.0054
  IIS-LLD    0.0186±0.0004  0.2160±0.0070  0.6387±0.0193  0.0083±0.0004  0.9922±0.0003  0.9547±0.0011
  Duo-LDL    0.0164±0.0005  0.2110±0.0059  0.5853±0.0115  0.0065±0.0004  0.9940±0.0002  0.9591±0.0010
  BFGS-LLD   0.0165±0.0004  0.1990±0.0050  0.5831±0.0142  0.0066±0.0005  0.9940±0.0003  0.9589±0.0009
  ENC-BFGS   0.0163±0.0004  -              0.5823±0.0128  0.0064±0.0004  0.9941±0.0002  0.9589±0.0015
  ENC-SDPT3  0.0163±0.0004  -              0.5825±0.0130  0.0064±0.0004  0.9940±0.0003  0.9589±0.0009
  Ours       0.0160±0.0004  0.1965±0.0036  0.5801±0.0070  0.0061±0.0002  0.9942±0.0002  0.9592±0.0007
Yeast-heat
  AA-KNN     0.0441±0.0012  0.1950±0.0050  0.3918±0.0112  0.0145±0.0011  0.9867±0.0006  0.9362±0.0023
  AA-BP      0.0534±0.0035  0.2280±0.0150  0.4589±0.0286  0.0244±0.0038  0.9782±0.0030  0.9251±0.0054
  IIS-LLD    0.0495±0.0013  0.1880±0.0030  0.3772±0.0068  0.0156±0.0012  0.9857±0.0007  0.9384±0.0011
  Duo-LDL    0.0422±0.0013  0.1831±0.0031  0.3646±0.0100  0.0133±0.0013  0.9880±0.0006  0.9406±0.0014
  BFGS-LLD   0.0425±0.0013  0.1820±0.0030  0.3642±0.0072  0.0135±0.0012  0.9878±0.0006  0.9402±0.0016
  ENC-BFGS   0.0422±0.0012  -              0.3642±0.0090  0.0133±0.0011  0.9880±0.0006  0.9402±0.0014
  ENC-SDPT3  0.0422±0.0012  -              0.3640±0.0098  0.0133±0.0010  0.9880±0.0006  0.9403±0.0016
  Ours       0.0421±0.0010  0.1819±0.0040  0.3631±0.0076  0.0129±0.0009  0.9880±0.0008  0.9406±0.0020
Yeast-spo
  AA-KNN     0.0627±0.0023  0.2710±0.0110  0.5597±0.0218  0.0303±0.0021  0.9730±0.0017  0.9082±0.0043
  AA-BP      0.0684±0.0031  0.2920±0.0220  0.5992±0.0417  0.0368±0.0037  0.9679±0.0029  0.9022±0.0069
  IIS-LLD    0.0605±0.0018  0.2550±0.0170  0.5231±0.0312  0.0281±0.0019  0.9753±0.0013  0.9143±0.0052
  Duo-LDL    0.0585±0.0020  0.2530±0.0168  0.5137±0.0135  0.0258±0.0017  0.9770±0.0012  0.9156±0.0023
  BFGS-LLD   0.0586±0.0021  0.2500±0.0170  0.5133±0.0145  0.0265±0.0018  0.9768±0.0013  0.9155±0.0023
  ENC-BFGS   0.0583±0.0018  -              0.5127±0.0155  0.0263±0.0017  0.9770±0.0012  0.9156±0.0023
  ENC-SDPT3  0.0583±0.0018  -              0.5126±0.0144  0.0263±0.0018  0.9770±0.0012  0.9156±0.0023
  Ours       0.0582±0.0012  0.2489±0.0062  0.5124±0.0164  0.0246±0.0013  0.9771±0.0012  0.9160±0.0023
Yeast-spoem
  AA-KNN     0.0904±0.0047  0.1370±0.0060  0.1914±0.0089  0.0291±0.0037  0.9764±0.0023  0.9072±0.0043
  AA-BP      0.0892±0.0049  0.1323±0.0080  0.1842±0.0108  0.0283±0.0034  0.9778±0.0034  0.9108±0.0056
  IIS-LLD    0.0905±0.0036  0.1321±0.0070  0.1840±0.0099  0.0291±0.0035  0.9774±0.0015  0.9109±0.0054
  Duo-LDL    0.0871±0.0037  0.1290±0.0080  0.1812±0.0072  0.0255±0.0030  0.9790±0.0015  0.9128±0.0058
  BFGS-LLD   0.0870±0.0037  0.1290±0.0080  0.1799±0.0082  0.0270±0.0035  0.9790±0.0015  0.9131±0.0038
  ENC-BFGS   0.0873±0.0037  -              0.1808±0.0092  0.0273±0.0037  0.9788±0.0016  0.9127±0.0042
  ENC-SDPT3  0.0874±0.0037  -              0.1808±0.0082  0.0273±0.0038  0.9788±0.0016  0.9126±0.0037
  Ours       0.0865±0.0055  0.1278±0.0090  0.1780±0.0096  0.0250±0.0027  0.9791±0.0017  0.9150±0.0038

Table 3: Experimental results on 9 datasets; the best results are bolded.

Ablation study. To verify the role of the LLDG, we conduct ablation experiments on the Gene dataset. Specifically, we modify the output dimension of the LLDG-Mixer and remove Loss_g from the loss function.
In addition, to demonstrate the effectiveness of the Tucker reconstruction algorithm, we simply remove the Tucker reconstruction algorithm and use a raw LLDG. The experimental results are summarized in Table 5, showing the effectiveness of the grid.

Dataset      Algorithm   Chebyshev↓      Clark↓          Canberra↓        K-L↓            Cosine↑         Intersection↑
Yeast-spo5   AA-KNN      0.0948±0.0036   0.1930±0.0110   0.2969±0.0146    0.0343±0.0031   0.9713±0.0022   0.9044±0.0051
             AA-BP       0.0949±0.0036   0.1890±0.0120   0.2912±0.0170    0.0339±0.0032   0.9723±0.0019   0.9062±0.0054
             IIS-LLD     0.0931±0.0037   0.1870±0.0130   0.2871±0.0191    0.0330±0.0032   0.9731±0.0019   0.9072±0.0034
             Duo-LDL     0.0913±0.0033   0.1840±0.0124   0.2821±0.0100    0.0293±0.0022   0.9741±0.0018   0.9088±0.0033
             BFGS-LLD    0.0914±0.0040   0.1840±0.0120   0.2829±0.0101    0.0324±0.0031   0.9741±0.0016   0.9086±0.0031
             ENC-BFGS    0.0912±0.0038   -               0.2823±0.0115    0.0322±0.0034   0.9741±0.0018   0.9088±0.0034
             ENC-SDPT3   0.0912±0.0038   -               0.2824±0.0104    0.0322±0.0034   0.9741±0.0016   0.9088±0.0033
             Ours        0.0906±0.0028   0.1828±0.0058   0.2814±0.0070    0.0286±0.0024   0.9742±0.0019   0.9089±0.0036
SJAFFE       AA-KNN      0.1141±0.0108   0.4100±0.0500   0.8431±0.1131    0.0712±0.0231   0.9337±0.0182   0.8552±0.0215
             AA-BP       0.1272±0.0126   0.5100±0.0540   1.0462±0.1250    0.0960±0.0183   0.9145±0.0140   0.8243±0.0216
             IIS-LLD     0.1194±0.0130   0.4190±0.0340   0.8751±0.0842    0.0700±0.0089   0.9314±0.0104   0.8513±0.0147
             Duo-LDL     0.1291±0.0120   0.5080±0.0350   0.8142±0.0700    0.1061±0.0112   0.9100±0.0100   0.8310±0.0123
             BFGS-LLD    0.1142±0.0132   0.3990±0.0440   0.8202±0.0675    0.0740±0.0135   0.9301±0.0121   0.8606±0.0121
             ENC-BFGS    0.0956±0.0103   -               0.7108±0.0553    0.0500±0.0090   0.9531±0.0086   0.8797±0.0101
             ENC-SDPT3   0.0959±0.0103   -               0.7115±0.0612    0.0500±0.0090   0.9530±0.0090   0.8797±0.0108
             Ours        0.0933±0.0067   0.3545±0.0018   0.7193±0.0455    0.0494±0.0055   0.9529±0.0053   0.8789±0.0082
Human gene   AA-KNN      0.0648±0.0019   2.3880±0.1090   16.2832±0.8072   0.3010±0.0084   0.7687±0.0046   0.7433±0.0128
             AA-BP       0.0624±0.0019   3.3440±0.2500   22.7847±1.8523   0.4691±0.0169   0.6906±0.0087   0.6712±0.0221
             IIS-LLD     0.0534±0.0016   2.1230±0.0880   14.5412±0.6534   0.2440±0.0035   0.8334±0.0040   0.7828±0.0098
             Duo-LDL     0.0534±0.0007   2.1100±0.0880   14.4423±0.2176   0.2358±0.0110   0.8345±0.0020   0.7852±0.0042
             BFGS-LLD    0.0534±0.0018   2.1110±0.0860   14.4532±0.2207   0.2480±0.0015   0.8342±0.0039   0.7846±0.0034
             ENC-BFGS    0.0534±0.0018   -               14.4543±0.2282   0.2264±0.0072   0.8342±0.0039   0.7846±0.0028
             ENC-SDPT3   0.0533±0.0018   -               14.4543±0.2282   0.2262±0.0072   0.8345±0.0039   0.7842±0.0034
             Ours        0.0522±0.0011   2.1008±0.0252   14.3775±0.1721   0.2313±0.0054   0.8368±0.0027   0.7856±0.0024
Movie        AA-KNN      0.1542±0.0048   0.6520±0.0230   1.2758±0.0457    0.2008±0.0102   0.8802±0.0026   0.7801±0.0056
             AA-BP       0.1572±0.0024   0.6750±0.0480   1.2693±0.0872    0.1792±0.0246   0.8948±0.0012   0.7882±0.0112
             IIS-LLD     0.1508±0.0016   0.5910±0.0280   1.1367±0.0542    0.1368±0.0121   0.9067±0.0023   0.8004±0.0100
             Duo-LDL     0.1240±0.0032   0.5750±0.0290   1.0772±0.0201    0.1131±0.0625   0.9264±0.0032   0.8221±0.0040
             BFGS-LLD    0.1355±0.0018   0.5890±0.0380   1.0617±0.0173    0.1292±0.0056   0.9231±0.0028   0.8192±0.0054
             ENC-BFGS    0.1199±0.0256   -               1.0345±0.0195    0.1210±0.0049   0.9298±0.0027   0.8282±0.0034
             ENC-SDPT3   0.1197±0.0024   -               1.0337±0.0175    0.1209±0.0049   0.9299±0.0032   0.8284±0.0032
             Ours        0.1196±0.0016   0.5735±0.0045   1.0339±0.0280    0.1074±0.0057   0.9290±0.0035   0.8258±0.0042
wc-LDL       AA-KNN      0.1122±0.0039   1.5657±0.0021   0.7998±0.0020    0.0498±0.0051   0.9704±0.0036   0.8611±0.0016
             AA-BP       0.0989±0.0019   0.6689±0.0019   0.8089±0.0049    0.0477±0.0018   0.9476±0.0020   0.8700±0.0033
             IIS-LLD     0.1009±0.0038   0.4199±0.0044   0.9008±0.0015    0.0519±0.0040   0.9779±0.0018   0.8660±0.0022
             Duo-LDL     0.1057±0.0019   1.0569±0.0039   0.7890±0.0039    0.0545±0.0037   0.9668±0.0049   0.8383±0.0018
             BFGS-LLD    0.0923±0.0030   0.4212±0.0036   0.8135±0.0024    0.0511±0.0049   0.9718±0.0022   0.8669±0.0047
             Ours        0.0745±0.0012   0.3684±0.0055   0.7660±0.0033    0.0419±0.0008   0.9897±0.0009   0.8819±0.0014
SBU-3DFE     AA-KNN      0.1119±0.0030   1.4657±0.0022   0.7700±0.0025    0.0492±0.0053   0.9753±0.0036   0.8710±0.0019
             AA-BP       0.0899±0.0021   0.6563±0.0019   0.8132±0.0100    0.0468±0.0021   0.9441±0.0011   0.8723±0.0034
             IIS-LLD     0.1009±0.0038   0.4199±0.0044   0.9008±0.0015    0.0519±0.0040   0.9780±0.0029   0.8660±0.0022
             Duo-LDL     0.1009±0.0038   0.4199±0.0044   0.9008±0.0015    0.0519±0.0040   0.9780±0.0029   0.8660±0.0022
             BFGS-LLD    0.1100±0.0025   0.9660±0.0039   0.7897±0.0033    0.0511±0.0021   0.9677±0.0056   0.8555±0.0032
             Ours        0.0811±0.0023   0.3987±0.0024   0.7533±0.0027    0.0354±0.0031   0.9888±0.0066   0.8997±0.0030

Table 4: Experimental results on 6 datasets; the best results are bolded.

Measures        w/ LLDG          w/o LLDG         w/o Tucker
Chebyshev↓      0.0522±0.0011    0.0524±0.0009    0.0529±0.0011
Clark↓          2.1008±0.0252    2.1189±0.0177    2.1180±0.0172
K-L↓            0.2313±0.0054    0.2333±0.0035    0.2334±0.0039
Canberra↓       14.3775±0.1721   14.4975±0.1331   14.4905±0.1202
Cosine↑         0.8368±0.0027    0.8346±0.0019    0.8349±0.0008
Intersection↑   0.7856±0.0024    0.7840±0.0017    0.7842±0.0019

Table 5: Results on ablation studies. This table demonstrates the role of LLDG; it is worth noting that the Tucker reconstruction algorithm brings significant benefits in boosting the performance of the algorithm. More ablation results are shown in the Appendix.

Noise disturbance. To verify the robustness of LLDG, we test the model under different degrees of Gaussian noise. The standard deviation of the noise is {0.1, 0.2, 0.5, 1.0} and the mean is 0; the experimental results are shown in Table 6. Although the model's performance declines as the standard deviation increases, it remains competitive compared to other methods. This shows that the model is robust to noise.

Potential of LLDG. Our model can be extended to handle classification. For each single label, the modeled Gaussian prior is expanded into a vector as the learning target for the LLDG. We report experiments on a public benchmark to show this potential. We evaluate the algorithm's performance on the MedMNIST Classification Decathlon benchmark. The area under the ROC curve (AUC) and Accuracy (ACC)
are used as the evaluation metrics. Our model is trained for 100 epochs, using a cross-entropy loss and an AdamW optimizer with a batch size of 128 and an initial learning rate of 1×10^-3. The overall performance of the methods is reported in Table 7, which shows that our method achieves competitive results.

Measures        σ=0.1            σ=0.2            σ=0.5            σ=1.0
Chebyshev↓      0.0524±0.0017    0.0528±0.0019    0.0526±0.0012    0.0531±0.0015
Clark↓          2.1102±0.0194    2.1113±0.0120    2.1214±0.0202    2.1243±0.0202
K-L↓            0.2330±0.0070    0.2335±0.0075    0.2359±0.0064    0.2362±0.0060
Canberra↓       14.4344±0.1591   14.4346±0.1967   14.5237±0.1783   14.5489±0.1715
Cosine↑         0.8360±0.0037    0.8353±0.0013    0.8355±0.0030    0.8338±0.0036
Intersection↑   0.7852±0.0025    0.7849±0.0020    0.7841±0.0028    0.7833±0.0027

Table 6: Results on noise interference. Statistically, LLDG shows outstanding performance even when Gaussian noise with a variance of 1 is added to the label distribution space.

Methods                PathMNIST      ChestMNIST     DermaMNIST     OCTMNIST       PneumoniaMNIST
                       AUC    ACC     AUC    ACC     AUC    ACC     AUC    ACC     AUC    ACC
ResNet-18 (28)         0.972  0.844   0.706  0.947   0.899  0.721   0.951  0.758   0.957  0.843
ResNet-18 (224)        0.978  0.860   0.713  0.948   0.896  0.727   0.960  0.752   0.970  0.861
ResNet-50 (28)         0.979  0.864   0.692  0.947   0.886  0.710   0.939  0.745   0.949  0.857
ResNet-50 (224)        0.978  0.848   0.706  0.947   0.895  0.719   0.951  0.750   0.968  0.896
auto-sklearn           0.500  0.186   0.647  0.642   0.906  0.734   0.883  0.595   0.947  0.865
AutoKeras              0.979  0.864   0.715  0.939   0.921  0.756   0.956  0.736   0.970  0.918
Google AutoML Vision   0.982  0.811   0.718  0.947   0.925  0.766   0.965  0.732   0.993  0.941
Ours                   0.986  0.877   0.723  0.949   0.933  0.765   0.969  0.756   0.993  0.949

Methods                RetinaMNIST    BreastMNIST    OrganMNIST_A   OrganMNIST_C   OrganMNIST_S
                       AUC    ACC     AUC    ACC     AUC    ACC     AUC    ACC     AUC    ACC
ResNet-18 (28)         0.727  0.515   0.897  0.859   0.995  0.921   0.990  0.889   0.967  0.762
ResNet-18 (224)        0.721  0.543   0.915  0.878   0.997  0.931   0.991  0.907   0.974  0.777
ResNet-50 (28)         0.719  0.490   0.879  0.853   0.995  0.916   0.990  0.893   0.968  0.746
ResNet-50 (224)        0.717  0.555   0.863  0.833   0.997  0.931   0.992  0.898   0.970  0.770
auto-sklearn           0.694  0.525   0.848  0.808   0.797  0.563   0.898  0.676   0.855  0.601
AutoKeras              0.655  0.420   0.833  0.801   0.996  0.929   0.992  0.915   0.972  0.803
Google AutoML Vision   0.762  0.530   0.932  0.865   0.988  0.818   0.986  0.861   0.964  0.706
Ours                   0.761  0.572   0.931  0.893   0.996  0.934   0.990  0.919   0.972  0.823

Table 7: Overall performance on MedMNIST in metrics of AUC and ACC, using ResNet-18 / ResNet-50 with resolutions 28 and 224, auto-sklearn, AutoKeras, and Google AutoML Vision.

Visualization and analysis. To demonstrate the anti-noise ability of the model more intuitively, we visualize the impact of different noise levels on the LLDG (our method is conducted on Yeast-dtt, where the size of the grid is 4³), as shown in Figure 3. From this observation, LLDG retains a nearly consistent representation space under different levels of noise interference. In addition, we statistically measure the distribution of the parameters of the network generating the LLDG; the results show that these parameters follow a standard normal distribution. In contrast, the parameter distribution of the model without LLDG is a Gaussian distribution with a mean of 0.2. Overall, the strong robustness of our method comes from two aspects: 1) the parameter space of the model is relatively stable, and 2) the Tucker reconstruction
technique eliminates the noise.

5 Limitations and Broad Impact

Although LLDG has good potential for uncertainty modeling, our approach has two limitations. a) It consumes a large amount of computing resources: when the dimensionality of the label distribution space is large (as on the Human Gene dataset), our method takes 4 hours to train one epoch on a single RTX 3090 GPU. b) For an arbitrary classification dataset, logical or integer-type labels require empirical construction into a label distribution, which often demands substantial trial and error. The datasets involved in this paper are publicly released by the researchers, and the LLDG algorithm does not touch on issues of personal privacy and security.

Figure 3: This figure shows the energy distribution of the LLDG (4³); the blue boxes indicate the regions where the energy varies as the Gaussian noise is strengthened. In general, the energy of the LLDG does not vary significantly with increasing Gaussian noise.

6 Conclusion

In this paper, we introduce a latent representation with uncertainty (LLDG) to address the problem of uncertainty in the label space of an arbitrary dataset. The effectiveness of LLDG modeling comes from Gaussian-sampling-based uncertainty and the regularization characteristics of the Tucker reconstruction technique. Numerous experiments validate that LLDG shows strong potential on both noisy and noise-free LDL datasets. In addition, our method remains competitive when extended to a classification task.
arXiv:2505.21182v1 [cs.LG] 27 May 2025

Learning What to Do and What Not To Do: Offline Imitation from Expert and Undesirable Demonstrations

Huy Hoang (Singapore Management University, mh.hoang.2024@phdcs.smu.edu.sg), Tien Mai (Singapore Management University, atmai@smu.edu.sg), Pradeep Varakantham (Singapore Management University, pradeepv@smu.edu.sg), Tanvi Verma (Institute of High Performance Computing, Agency for Science, Technology and Research, Singapore, Tanvi_Verma@ihpc.a-star.edu.sg)

Abstract

Offline imitation learning typically learns from expert and unlabeled demonstrations, yet often overlooks the valuable signal in explicitly undesirable behaviors. In this work, we study offline imitation learning from contrasting behaviors, where the dataset contains both expert and undesirable demonstrations. We propose a novel formulation that optimizes a difference of KL divergences over the state-action visitation distributions of expert and undesirable (or bad) data. Although the resulting objective is a DC (Difference-of-Convex) program, we prove that it becomes convex when expert demonstrations outweigh undesirable demonstrations, enabling a practical and stable non-adversarial training objective. Our method avoids adversarial training and handles both positive and negative demonstrations in a unified framework. Extensive experiments on standard offline imitation learning benchmarks demonstrate that our approach consistently outperforms state-of-the-art baselines.

1 Introduction

Imitation learning [8, 21, 25, 15, 40] offers a compelling alternative to Reinforcement Learning (RL) [37, 32, 29] by enabling agents to learn directly from expert demonstrations without the need for explicit reward signals. This paradigm has been successfully applied in various domains, even with limited expert data, and is particularly effective in capturing complex human behaviors and preferences.
Traditional imitation learning typically assumes access to high-quality expert demonstrations, which can be expensive and difficult to obtain [34, 38, 44]. In practice, datasets often contain a mixture of expert and sub-optimal demonstrations. Recent advances in imitation learning have begun to address this more realistic setting, aiming to develop algorithms that can leverage informative signals from both expert and non-expert data [4, 30, 15].

While existing imitation learning approaches in the mixed-quality setting typically assume that mixed-quality demonstrations are not drastically different from expert behavior, they often frame learning as mimicking both expert and sub-optimal trajectories, albeit with different weights [21, 20, 40]. However, in practice, mixed-quality data may contain poor or undesirable demonstrations that the agent should explicitly avoid. For example, in autonomous driving, undesirable demonstrations may include unsafe lane changes or traffic violations, which should not be imitated under any circumstances. Another example can be found in healthcare applications, where undesirable demonstrations may correspond to incorrect diagnoses or unsafe treatment plans that could harm patients if imitated. Existing imitation learning approaches are limited in their ability to handle contrasting demonstrations. Most methods are either not explicitly designed to avoid undesirable behaviors, or are ill-equipped to deal with scenarios where both expert and undesirable demonstrations coexist within the dataset [39, 42, 15]. It is important to note that learning by mimicking expert or mildly sub-optimal demonstrations is often tractable, as the corresponding objective, typically framed as divergence minimization, is convex [21, 20]. However, incorporating objectives that explicitly avoid bad (or undesirable) demonstrations can introduce non-convexities, making the optimization significantly more challenging.

Preprint. Under review.
In this paper, we propose a unified framework that addresses these challenges, aiming
to bridge this gap in the current imitation learning literature. Specifically, we focus on the setting of offline imitation learning, where interaction with the environment is not available, and assume that the dataset contains both expert and undesirable demonstrations. We make the following contributions:

• We formulate the learning problem with the goal of matching expert behavior while explicitly avoiding undesirable demonstrations. Although the resulting training objective is expressed as the difference between two KL divergences (and is therefore difference-of-convex), we prove that it becomes convex when the expert component outweighs the undesirable one. This convexity is critical, as it enables us to reformulate the learning problem over the state-action visitation distribution as a more tractable unconstrained optimization via Lagrangian duality. Our objective stands in contrast to most existing distribution-matching imitation learning approaches, which typically rely solely on divergence minimization and naturally yield convex objectives. By introducing a divergence maximization term to account for undesirable behavior, we demonstrate that the overall objective remains convex and manageable.

• We further enhance the learning objective by proposing a surrogate objective that lower-bounds the original one, offering the advantage of a non-adversarial and convex optimization problem in the Q-function space. In addition, we introduce a novel Q-weighted behavior cloning (BC) approach, supported by theoretical guarantees, for efficient policy extraction.

• Extensive experiments on standard imitation learning benchmarks show that our method consistently outperforms existing approaches, both in conventional settings where datasets contain expert and unlabeled demonstrations, and in more realistic scenarios where explicitly undesirable demonstrations are included.

2 Related Works

Imitation Learning.
Imitation learning trains agents to mimic expert behavior from demonstrations, with Behavioral Cloning (BC) serving as a foundational method by maximizing the likelihood of expert actions. However, BC often suffers from distributional shift [34]. Recent work addresses this issue by leveraging the strong generalization capabilities of generative models [43, 5]. Inspired by GANs [11], methods like GAIL [14] and AIRL [7] use a discriminator to align the learner's policy with the expert's, while SQIL [33] simplifies reward assignment by distinguishing expert and non-expert behaviors. Although effective, these approaches typically require online interaction, which may be impractical in many real-world scenarios. To address this, offline methods such as AlgaeDICE [31] and ValueDICE [22] employ Stationary Distribution Correction Estimation (DICE), though they often encounter stability issues. Building on ValueDICE, O-NAIL [3] avoids adversarial training, enabling stable offline imitation. More recently, several approaches have extended the DICE framework with stronger theoretical foundations and improved empirical performance [24, 28]. In parallel, IQ-Learn [8] has emerged as a unified framework for both online and offline imitation learning, inspiring a range of follow-up works [2, 16]. However, all these approaches rely on the presence of many expert demonstrations, which may not always be available.

Offline imitation learning from suboptimal demonstrations: Several approaches have been developed to tackle the challenges of offline imitation learning from suboptimal data, which is common in real-world scenarios. A notable direction involves preference-based methods, where algorithms infer reward functions by leveraging ranked or pairwise-compared trajectories to guide
learning [19, 18, 13]. Recent works, such as SPRINQL [15], take advantage of demonstrations that exhibit varying levels of suboptimality, enabling the learner to better generalize beyond near-optimal behaviors. Another important line of research explores the use of unlabeled demonstrations in conjunction with a limited number of expert trajectories. Techniques like DemoDICE [21], SMODICE [27], and ReCOIL [35] apply Distribution Correction Estimation (DICE) [36, 24, 28] to re-weight trajectories and align the state or state-action distributions with those of the expert. In parallel, classifier-based methods, such as DWBC [40], ISW-BC [25], and ILID [41], use discriminators to distinguish expert-like behaviors within mixed-quality data and assign them greater importance. Collectively, these strategies aim to enhance policy robustness and performance in offline settings where high-quality expert data is scarce or expensive to obtain. However, all of these approaches are primarily focused on imitating and are unable to avoid undesirable or bad demonstrations, which is crucial in domains such as self-driving where there are many unsafe behaviors that need to be avoided. SafeDICE [17] was introduced to address this problem of avoiding undesirable or bad demonstrations. However, SafeDICE is not designed to handle scenarios where both expert and undesirable datasets are available. Moreover, their approach still relies on minimizing a divergence between the learning policy and a mixture of unlabeled and undesirable data, an approach that is vulnerable to the quality of the unlabeled dataset and may degrade when such data is of low quality. In this paper, we build on the principle of "Imitate the Good and Avoid the Bad", which has recently gained attention in preference-based and safe reinforcement learning [1, 15, 10] and large language model training [26].
We extend this idea to the offline imitation setting by proposing a novel and efficient method that learns from expert demonstrations while avoiding undesirable ones. To our knowledge, this is the first offline imitation learning approach to efficiently learn policies by jointly utilizing both expert and undesirable demonstrations.

3 Preliminaries

Markov Decision Process (MDP). We consider an MDP defined by the tuple $M = \langle S, A, r, P, \gamma, s_0 \rangle$, where $S$ denotes the set of states, $s_0$ represents the initial state set, $A$ is the set of actions, $r: S \times A \to \mathbb{R}$ defines the reward function for each state-action pair, and $P: S \times A \to S$ is the transition function, i.e., $P(s'|s,a)$ is the probability of reaching state $s' \in S$ when action $a \in A$ is taken at state $s \in S$; $\gamma$ is the discount factor. In reinforcement learning (RL), the aim is to find a policy that maximizes the expected long-term accumulated reward:
$$\max_\pi \; \mathbb{E}_{(s,a)\sim d^\pi}[r(s,a)],$$
where $d^\pi$ is the occupancy measure (or state-action visitation distribution) of policy $\pi$:
$$d^\pi(s,a) = (1-\gamma)\,\pi(a|s)\sum_{t=0}^{\infty}\gamma^t P(s_t = s \mid \pi).$$

Offline Imitation Learning. Recent imitation learning (IL) approaches have adopted a distribution-matching formulation, where the objective is to minimize the divergence between the occupancy measures (i.e., state-action visitation distributions) of the learning policy and the expert policy:
$$\min_{d^\pi} \; D_f\big(d^\pi \,\|\, d^E\big),$$
where $D_f$ denotes an $f$-divergence between the occupancy distributions $d^\pi$ (induced by the learning policy $\pi$) and $d^E$ (induced by the expert policy). In
particular, when the Kullback–Leibler (KL) divergence is used, the learning objective becomes:
$$\min_{d^\pi} \; \mathbb{E}_{(s,a)\sim d^\pi}\Big[\log \frac{d^\pi(s,a)}{d^E(s,a)}\Big].$$
In the space of state-action visitation distributions ($d^\pi$), the training can be formulated as a convex constrained optimization problem. To enable efficient training, Lagrangian duality is typically employed to recast the problem into an unconstrained form [24, 21].

Offline IL with unlabeled data. In offline imitation learning with unlabeled data, it is typically assumed that a limited set of expert demonstrations $B_E$ is available, along with a larger set of unlabeled demonstrations $B_{MIX}$. Distribution-matching approaches have been widely adopted to handle this setting. Prior methods often formulate the objective as a weighted sum of divergences between the learning policy and both expert and unlabeled data:
$$\min_{d^\pi} \; \big\{ D_f\big(d^\pi \,\|\, d^E\big) + \alpha\, D_f\big(d^\pi \,\|\, d^{MIX}\big) \big\}, \quad \alpha \ge 0.$$
Other approaches construct mixtures of occupancy distributions, such as $d^{\pi,MIX} = \alpha d^\pi + (1-\alpha) d^{MIX}$ and $d^{E,MIX} = \alpha d^E + (1-\alpha) d^{MIX}$, and minimize the divergence between $d^{\pi,MIX}$ and $d^{E,MIX}$ [21, 20, 27, 35]. In most existing approaches along this line of research, the convexity of the objective with respect to $d^\pi$ has been heavily leveraged to derive tractable learning objectives. However, when a divergence maximization term is introduced, as in our approach, this convexity may no longer hold, rendering many existing methods inapplicable.

4 ContraDICE: Offline Imitation Learning from Contrasting Behaviors

We begin by introducing a novel learning objective based on the difference between two KL divergences. Leveraging the convexity of this formulation, we derive a tractable and unconstrained optimization problem. Given that the resulting objective includes exponential terms that may lead to numerical instability, we enhance this by proposing a lower-bound approximation.
This approximation enables us to reformulate the learning process as a more tractable, non-adversarial Q-learning objective, which remains convex in the space of Q-functions.

4.1 Dual KL-Based Formulation

Assume that we have access to three sets of demonstrations: the good dataset $B_G$ contains good or expert demonstrations, the bad dataset $B_B$ contains bad or undesirable demonstrations that the agent should avoid, and the unlabeled dataset $B_{MIX}$ is a large set of unlabeled demonstrations used to support offline training. We consider the realistic scenario where the identified datasets $B_G$ and $B_B$ are limited in size, while $B_{MIX}$ is significantly larger, an assumption that aligns with typical settings in offline imitation learning from unlabeled demonstrations.

Let $d^\pi(s,a)$, $d^G(s,a)$, and $d^B(s,a)$ denote the state-action visitation distributions induced by the learned policy $\pi$, the good policy, and the bad policy, respectively. Following the DICE framework [31, 22], we propose to optimize the following training objective:
$$\min_{d^\pi} \; f(d^\pi) = D_{KL}(d^\pi \,\|\, d^G) - \alpha\, D_{KL}(d^\pi \,\|\, d^B), \qquad (1)$$
where $\alpha > 0$ is a tunable hyperparameter. The goal of this objective is twofold: (1) to minimize the divergence between the learned policy and the good policy, and (2) to maximize the divergence from the bad policy, thereby avoiding undesirable behavior. This formulation differs from all existing DICE-based approaches in the literature, which primarily focus on minimizing KL divergence, even when dealing with undesirable or unsafe demonstrations. By contrast, our approach introduces a principled mechanism to explicitly repel the learned policy from undesirable behavior while still aligning it with good data. While the presence of a KL divergence maximization term in the objective may raise concerns about the convexity of the training problem, we
observe that the objective in (1) takes the form of a difference between two convex functions. This is, in general, not convex and can be challenging to optimize. Fortunately, we show that under a mild condition, the overall objective remains convex. Specifically, if the weight on the bad-policy divergence term is smaller than that on the good policy (i.e., $\alpha < 1$), then the objective becomes convex in $d^\pi$.

Proposition 4.1. If $\alpha \le 1$, then the objective function $f(d^\pi) = D_{KL}(d^\pi \,\|\, d^G) - \alpha\, D_{KL}(d^\pi \,\|\, d^B)$ is convex in $d^\pi$.

Convexity is essential in most DICE-based frameworks, as it enables the use of Lagrangian duality to construct well-behaved and tractable training objectives. Our goal is to develop a Q-learning method that recovers a policy minimizing the objective in (1). To this end, we formulate the problem as the following constrained optimization:
$$\min_{d,\pi} \; f(d,\pi) = D_{KL}(d \,\|\, d^G) - \alpha\, D_{KL}(d \,\|\, d^B) \qquad (2)$$
$$\text{s.t.} \quad d(s,a) = (1-\gamma)\,p_0(s)\,\pi(a|s) + \gamma\,\pi(a|s)\sum_{s',a'} d(s',a')\,T(s|s',a'),$$
where $d(s,a)$ is the state-action visitation distribution, and $T$ is the environment transition function. Let $B_U = B_G \cup B_{MIX}$ denote the union dataset, and let $d^U$ be the state-action visitation distribution derived from it. The following proposition gives another formulation of the objective in (1):

Proposition 4.2. The objective function in (2) can be written as:
$$f(d,\pi) = (1-\alpha)\, D_{KL}(d \,\|\, d^U) - \mathbb{E}_{(s,a)\sim d}\big[\Psi(s,a)\big],$$
where
$$\Psi(s,a) = \log\frac{d^G(s,a)}{d^U(s,a)} - \alpha \log\frac{d^B(s,a)}{d^U(s,a)}.$$
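The convexity claim in Proposition 4.1 can be checked directly by expanding the two KL terms; the cross terms are linear in $d$, and the entropy-like term carries the factor $(1-\alpha)$:

```latex
\begin{align*}
f(d) &= D_{KL}(d \,\|\, d^G) - \alpha\, D_{KL}(d \,\|\, d^B) \\
     &= \sum_{s,a} d(s,a)\log d(s,a) - \sum_{s,a} d(s,a)\log d^G(s,a)
        - \alpha \sum_{s,a} d(s,a)\log d(s,a)
        + \alpha \sum_{s,a} d(s,a)\log d^B(s,a) \\
     &= (1-\alpha)\sum_{s,a} d(s,a)\log d(s,a)
        - \sum_{s,a} d(s,a)\bigl[\log d^G(s,a) - \alpha\,\log d^B(s,a)\bigr].
\end{align*}
```

The second sum is linear in $d$, and $x \mapsto x\log x$ is convex, so $f$ is convex whenever $1-\alpha \ge 0$, i.e., $\alpha \le 1$.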
Given the convexity of the objective in (1), we can equivalently move the constraints into the objective using Lagrangian duality, leading to the following Q-learning formulation (details of the derivation are given in the appendix):

max_π min_Q { (1 − γ) E_{(s,a)∼p0,π}[Q(s, a)] + (1 − α) E_{(s,a)∼dU}[ exp( (Ψ(s, a) + γ E_{(s′,a′)∼T,π}[Q(s′, a′)] − Q(s, a)) / (1 − α) ) ] }.

To further enhance the efficiency of Q-learning, we adopt the well-known Maximum Entropy (MaxEnt) reinforcement learning framework by incorporating an entropy term into the training objective [8, 12]. This leads to the following objective:

L(Q, π) = (1 − γ) E_{(s,a)∼p0,π}[ Q(s, a) − β log( π(a|s)/μU(a|s) ) ]
  + (1 − α) E_{(s,a)∼dU}[ exp( (Ψ(s, a) + γ E_{(s′,a′)∼T,π}[ Q(s′, a′) − β log( π(a′|s′)/μU(a′|s′) ) ] − Q(s, a)) / (1 − α) ) ],

where μU(a|s) is the behavior policy representing the union dataset BU. We now define the soft value function and the soft Bellman operator as follows:

Vπ_Q(s) = E_{a∼π(·|s)}[ Q(s, a) − β log( π(a|s)/μU(a|s) ) ],
Tπ[Q](s, a) = Q(s, a) − γ E_{s′∼T(·|s,a)}[ Vπ_Q(s′) ].

Using these definitions, the training objective can be rewritten as:

L(Q, π) = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] + (1 − α) E_{(s,a)∼dU}[ exp( (Ψ(s, a) − Tπ[Q](s, a)) / (1 − α) ) ].   (3)

This formulation shares structural similarities with IQ-Learn, where Tπ[Q](s, a) is referred to as the inverse Bellman operator and is often interpreted as a reward function expressed in terms of the Q-function itself.

Remark. The objective in Equation (3) is valid only when α < 1. In the special case where α = 1, i.e., when the bad demonstrations are weighted equally to the expert demonstrations, the training objective
simplifies significantly. According to Proposition 4.2, the training objective reduces to a standard offline RL problem with reward function Ψ(s, a): max_d E_{(s,a)∼d}[Ψ(s, a)] = max E[ Σ_{t=0}^∞ γ^t Ψ(st, at) ].

4.2 Tractable Lower-Bounded Objective

In this section, we propose an additional step to improve the stability and tractability of the learning objective introduced above. We first observe that the exponential term in Equation (3) may lead to instability during training. To address this issue, we approximate the exponential with a linear lower bound, which not only improves stability but also preserves a similar optimization objective.

Proposition 4.3. Let the surrogate objective be defined as:

eL(Q, π) = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] − E_{dU}[ δ(s, a) Tπ[Q](s, a) ] + (1 − α) E_{dU}[ δ(s, a) ],   (4)

where δ(s, a) = exp( Ψ(s, a)/(1 − α) ). Then eL(Q, π) is a lower bound of L(Q, π), with equality when Tπ[Q](s, a) = 0 for all (s, a).

The lower-bound approximation eL(Q, π) offers several benefits. First, as a valid lower bound of L(Q, π), maximizing eL(Q, π) promotes the original objective. Second, its structure, linear in Q and concave in π, leads to a simplified, non-adversarial training procedure (see Proposition 4.4). Finally, its optimization goals remain aligned with those of L(Q, π), encouraging a high expected soft value under the initial state distribution and consistency between the soft Bellman residual and the guidance signal Ψ(s, a).

Remark. We note that the training objective in Equation (4) generalizes the IQ-Learn objective [8] as a special case. In particular, eL(Q, π) reduces exactly to the IQ-Learn objective when α = 0 (i.e., the undesirable dataset is ignored) and BG ≡ BU (i.e., the good dataset coincides with the union dataset). To see this, observe that when α = 0 and dG = dU, the term Ψ(s, a) becomes zero for all (s, a). As a result, the surrogate objective simplifies to eL(Q, π) = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] − E_{(s,a)∼dG}[ Tπ[Q](s, a) ], which is exactly the training objective proposed in IQ-Learn.
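The linearization behind Proposition 4.3 rests on the elementary bound e^t ≥ t + 1. A quick numerical check (assuming NumPy; the Gaussian samples below are hypothetical stand-ins for Ψ(s, a) and the Bellman residuals, not real training quantities) confirms that the surrogate never exceeds the exponential term it replaces, with equality when the residuals vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5
psi = rng.normal(size=1000)    # stand-in guidance signal Psi(s, a)
resid = rng.normal(size=1000)  # stand-in Bellman residuals T^pi[Q](s, a)
delta = np.exp(psi / (1 - alpha))

# Exact dU-term of L(Q, pi): (1 - alpha) E[delta * exp(-resid / (1 - alpha))]
exact = (1 - alpha) * np.mean(delta * np.exp(-resid / (1 - alpha)))
# Surrogate from eL(Q, pi): -E[delta * resid] + (1 - alpha) E[delta]
surrogate = -np.mean(delta * resid) + (1 - alpha) * np.mean(delta)
assert surrogate <= exact  # eL is a lower bound

# Equality when the residuals are identically zero
zeros = np.zeros_like(resid)
exact0 = (1 - alpha) * np.mean(delta * np.exp(-zeros / (1 - alpha)))
surr0 = -np.mean(delta * zeros) + (1 - alpha) * np.mean(delta)
assert np.isclose(exact0, surr0)
```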
Thus, our formulation can be viewed as a principled extension of IQ-Learn that explicitly accounts for and contrasts between good and bad behaviors. We now present several key properties of the training objective eL(Q, π) that make it particularly convenient and tractable to use, formalized in Proposition 4.4 below.

Proposition 4.4. The following properties hold:
(i) eL(Q, π) is linear in Q and concave in π. As a result, the max–min optimization can be equivalently reformulated as a min–max problem: max_π min_Q eL(Q, π) = min_Q max_π eL(Q, π).
(ii) The min–max problem min_Q max_π eL(Q, π) reduces to the following non-adversarial problem:

min_Q eL(Q) = (1 − γ) E_{s∼p0}[ V_Q(s) ] − E_{(s,a)∼dU}[ exp( Ψ(s, a)/(1 − α) ) T[Q](s, a) ],

where the soft value function is V_Q(s) = β log( Σ_a μU(a|s) exp( Q(s, a)/β ) ), and the soft Bellman residual operator is T[Q](s, a) = Q(s, a) − γ E_{s′}[V_Q(s′)]. Moreover, eL(Q) is convex in Q.

5 Practical Algorithm

Estimating Occupancy Ratios. The training objective involves several ratios between state-action visitation distributions, which are not directly observable. These quantities can be estimated by solving corresponding discriminator problems. Specifically, to estimate the ratio dG(s, a)/dU(s, a), we train a binary classifier cG : S × A → [0, 1] by solving the standard logistic regression objective:

max_cG E_{(s,a)∼dG}[ log cG(s, a) ] + E_{(s,a)∼dU}[ log(1 − cG(s, a)) ]
(5)

Let cG∗(s, a) be the optimal solution to this problem; the ratio can then be computed as dG(s, a)/dU(s, a) = cG∗(s, a)/(1 − cG∗(s, a)). Similar discriminators can be trained to estimate other ratios, such as dB(s, a)/dU(s, a).

Implicit V-Update and Regularizers. In the surrogate objective eL(Q), the value function V_Q is typically computed via a log-sum-exp over Q, which becomes intractable in large or continuous action spaces. To address this, we adopt Extreme Q-Learning (XQL) [9], which avoids the log-sum-exp by introducing an auxiliary optimization over V, jointly updated with Q. Specifically, V is optimized using the Extreme-V objective:

J(V|Q) = E_{(s,a)∼dU}[ e^{t(s,a)} − t(s, a) − 1 ],  where t(s, a) = (Q(s, a) − V(s))/β.

The main training objective with fixed V is:

eL(Q|V) = (1 − γ) E_{s∼p0}[ V(s) ] − E_{(s,a)∼dU}[ exp( Ψ(s, a)/(1 − α) ) ( Q(s, a) − γ E_{s′}[V(s′)] ) ].   (6)

The overall optimization alternates between (i) updating Q by minimizing eL(Q|V) and (ii) updating V by minimizing J(V|Q). Both sub-problems are convex, enabling efficient and stable training. To further enhance stability, we follow [8, 9] and add a convex regularizer ϕ(T[Q](s, a)) to prevent reward divergence. We use the χ2-divergence, ϕ(t) = t²/2, a common choice in Q-learning.

Policy Extraction. Once the Q and V functions are obtained, a common approach for expert policy extraction is to apply advantage-weighted behavior cloning (AW-BC) [23, 9, 13, 35]:

max_π Σ_{(s,a)∼BU} exp( (Q(s, a) − V(s))/β ) log π(a|s).   (7)

A key limitation of this formulation is that the value function V(s) is only an approximate estimate from the Extreme-V objective, potentially introducing noise and bias into the advantage computation and degrading policy quality. To address this, we propose a Q-only alternative that avoids reliance on V(s). The following proposition shows that this Q-based objective can, in theory, recover the same optimal policy as the original advantage-weighted BC formulation.

Proposition 5.1.
The following Q-weighted behavior cloning (BC) objective yields the same optimal policy as the original advantage-weighted BC formulation in (7):

max_π Σ_{(s,a)∼BU} exp( Q(s, a)/β ) log π(a|s).   (8)

Algorithm 1 ContraDICE
Require: Datasets BG, BB, BMIX; training steps Nμ, N; models: cG_wG, cB_wB, πθ, Q_wq, V_wv
1: Assign BU = BG ∪ BMIX
2: # Train discriminators cG_wG and cB_wB
3: for i = 1 to Nμ do
4:   Update (wG, wB) to minimize Objective (5).
5: end for
6: # Train Q_wq and V_wv, and policy πθ
7: for i = 1 to N do
8:   Update wq to minimize eL(Q_wq | V_wv)
9:   Update wv to minimize J(V_wv | Q_wq)
10:  Update θ via QW-BC: max_π { Σ_{(s,a)∼BU} e^{Q(s,a)/β} log π(a|s) }
11: end for

While the Q-weighted BC objective is theoretically equivalent to the advantage-weighted BC objective in terms of the optimal policy it recovers, it provides a simpler and more practical formulation. This simplification can lead to more stable and accurate optimization in practice. Our experimental results further demonstrate that the Q-weighted formulation consistently yields significantly better training outcomes compared to the advantage-weighted BC baseline. Bringing all components together, we present our ContraDICE algorithm in Algorithm 1.

6 Experiments

In this section, we conduct extensive experiments to evaluate our method, focusing on the following key questions: (Q1) Can ContraDICE effectively leverage both labeled good and bad data to outperform existing baselines? (Q2) How does the size of the bad dataset BB affect the performance of ContraDICE? (Q3) ContraDICE relies on an important parameter α to balance the objectives for good and bad data; how does this parameter affect overall performance? Moreover, we
also provide additional experiments in the Appendix.

6.1 Experiment Setting

Environments and Dataset Generation. We evaluate our method in the setting of learning from the good dataset BG while avoiding the bad dataset BB, with support from an additional unlabeled dataset BMIX. The use of such unlabeled data is common in offline imitation learning from mixed-quality demonstrations. Our experiments span four MuJoCo locomotion tasks (CHEETAH, ANT, HOPPER, WALKER), four hand manipulation tasks from Adroit (PEN, HAMMER, DOOR, RELOCATE), and one task from FrankaKitchen (KITCHEN), all sourced from the official D4RL benchmark [6]. For each MuJoCo task from D4RL, there are three types of datasets: RANDOM, MEDIUM, and EXPERT. The good dataset BG is constructed from a single trajectory of the EXPERT dataset. The bad dataset BB consists of 10 trajectories selected from either the RANDOM or MEDIUM dataset. To construct the unlabeled dataset BMIX, we combine the entire RANDOM or MEDIUM dataset (i.e., the same source as BB) with 30 additional trajectories from the EXPERT dataset. This setup mirrors the challenging RANDOM+FEW-EXPERT and MEDIUM+FEW-EXPERT scenarios introduced in ReCOIL [35]. These three datasets, BG, BB, and BMIX, form the foundation of our training pipeline. We use the same dataset construction strategy for the Adroit and FrankaKitchen tasks, yielding 18 distinct dataset combinations. Please refer to the Appendix for detailed descriptions of all dataset combinations.

Baselines. We compare our method against several baselines. First, we evaluate two naive behavioral cloning approaches: one that learns directly from the large unlabeled dataset BMIX (BC-MIX), and one that learns solely from the good dataset BG (BC-G). Next, we include comparisons with state-of-the-art methods designed to leverage both expert (or good) data BG and unlabeled data BMIX, including SMODICE [27], ILID [41], and ReCOIL [35].
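The dataset construction described above can be sketched with placeholder trajectory identifiers; the names and counts below mirror the RANDOM+FEW-EXPERT setting (the identifiers are purely illustrative stand-ins, the actual trajectories come from D4RL):

```python
# Placeholder trajectory IDs standing in for D4RL trajectories (illustrative only).
expert_trajs = [f"expert_{i}" for i in range(100)]
random_trajs = [f"random_{i}" for i in range(200)]  # low-quality source pool

B_G = expert_trajs[:1]                     # good set: a single expert trajectory
B_B = random_trajs[:10]                    # bad set: 10 low-quality trajectories
B_MIX = random_trajs + expert_trajs[:30]   # unlabeled: full low-quality pool + 30 expert
B_U = B_G + B_MIX                          # union dataset used for offline training
```

This makes the asymmetry explicit: the labeled sets B_G and B_B are tiny relative to the unlabeled pool B_MIX, matching the assumption stated in Section 4.1.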
We exclude DWBC [40] from this experiment: both DWBC and ILID use discriminator-based objectives, and ILID has been shown to outperform DWBC. In addition, based on our proposed objective in (4), we include a variant of our method that learns only from BG and BMIX (i.e., α = 0), called ContraDICE-G. For methods that incorporate support from bad data BB, we evaluate our approach against SafeDICE [17]. Given the limited number of existing baselines that effectively utilize poor-quality data in offline imitation learning, we also propose a simple adaptation of DWBC, called DWBC-GB, that jointly learns from BG, BB, and BMIX. Detailed implementations of these baselines are provided in the Appendix.

Evaluation Metrics. We evaluate all methods using five training seeds. For each seed, we collect the results from the last 10 evaluations (each evaluation consists of 10 different environment seeds), then aggregate all evaluations across seeds to compute the mean and standard deviation, which reflect the converged performance of each method. Across all experiments, we report the normalized score commonly used in D4RL tasks:

Normalized Score = (Score − Random Score) / (Expert Score − Random Score).

This normalization provides a consistent performance measure across environments.

Reproducibility. We provide detailed hyperparameters and network architectures for each task in the Appendix. To ensure reproducibility and comparison, the source code is publicly available at:
https://github.com/hmhuy0/ContraDICE

6.2 Main Comparison

| Task | Unlabeled BMIX | BC-MIX | BC-G | SMODICE | ILID | ReCOIL | ContraDICE-G | SafeDICE | DWBC-GB | ContraDICE | Expert |
|---|---|---|---|---|---|---|---|---|---|---|---|
| CHEETAH | RANDOM+EXPERT | 2.3±0.0 | −0.6±0.7 | 4.6±2.7 | 21.1±7.6 | 2.0±0.6 | 84.4±5.3 | −0.0±0.0 | 2.8±1.1 | 86.7±5.0 | 90.6 |
| CHEETAH | MEDIUM+EXPERT | 42.5±0.5 | −0.6±0.7 | 42.4±3.5 | 40.3±15.6 | 42.5±0.6 | 48.6±4.4 | 37.7±0.3 | 5.6±4.3 | 77.6±8.1 | 90.6 |
| ANT | RANDOM+EXPERT | 30.9±0.1 | −7.2±10.3 | 4.6±21.6 | 71.8±19.4 | 56.2±11.2 | 100.6±22.1 | −2.6±0.0 | 6.5±7.5 | 112.7±12.9 | 117.5 |
| ANT | MEDIUM+EXPERT | 91.2±1.9 | −7.2±10.3 | 88.5±9.3 | 39.6±25.7 | 100.8±9.0 | 102.4±7.8 | 88.1±0.9 | −4.3±5.3 | 107.4±11.0 | 117.5 |
| HOPPER | RANDOM+EXPERT | 4.9±0.2 | 17.9±6.1 | 56.4±20.6 | 81.6±32.0 | 81.0±32.8 | 79.4±33.1 | 41.1±3.1 | 40.8±21.3 | 93.6±20.5 | 109.6 |
| HOPPER | MEDIUM+EXPERT | 52.2±1.3 | 17.9±6.1 | 53.0±3.7 | 87.9±11.9 | 46.1±18.5 | 70.6±17.9 | 55.8±3.7 | 21.6±8.9 | 103.7±16.3 | 109.6 |
| WALKER | RANDOM+EXPERT | 1.5±0.1 | 3.8±3.3 | 106.6±1.5 | 100.1±9.8 | 29.8±33.4 | 97.5±24.0 | 23.0±1.8 | 17.4±16.7 | 107.4±3.7 | 107.7 |
| WALKER | MEDIUM+EXPERT | 70.8±0.7 | 3.8±3.3 | 6.0±5.0 | 89.7±23.7 | 72.1±12.1 | 99.8±15.5 | 60.2±2.9 | 25.6±16.6 | 108.2±0.9 | 107.7 |
| PEN | CLONED+EXPERT | 56.0±1.1 | 8.8±3.1 | 10.9±14.6 | 1.9±4.7 | 79.2±21.4 | 66.3±21.5 | 19.9±4.6 | 9.5±8.8 | 96.4±19.4 | 107.0 |
| PEN | HUMAN+EXPERT | 18.3±1.4 | 8.8±3.1 | −2.5±0.5 | 5.1±4.8 | 99.9±18.9 | 95.5±19.7 | 21.8±5.7 | 6.5±5.3 | 101.5±18.7 | 107.0 |
| HAMMER | CLONED+EXPERT | 0.4±0.8 | 1.4±0.7 | 0.8±0.9 | 0.4±1.3 | 3.4±4.6 | 66.5±26.3 | 0.0±0.2 | 2.8±5.6 | 74.3±17.8 | 119.0 |
| HAMMER | HUMAN+EXPERT | 12.8±7.3 | 1.4±0.7 | 1.9±4.6 | 1.2±3.1 | 113.2±12.4 | 113.2±16.1 | 0.6±0.8 | 3.4±4.2 | 120.0±8.3 | 119.0 |
| DOOR | CLONED+EXPERT | 0.4±0.7 | −0.1±0.1 | −0.1±0.1 | −0.1±0.2 | 19.3±16.7 | 92.6±11.3 | −0.0±0.0 | −0.1±0.1 | 102.4±3.8 | 105.3 |
| DOOR | HUMAN+EXPERT | 4.0±2.6 | −0.1±0.1 | −0.1±0.7 | 0.2±1.6 | 100.3±6.4 | 104.7±1.5 | 0.9±0.9 | 1.1±1.1 | 105.0±1.2 | 105.3 |
| RELOCATE | CLONED+EXPERT | −0.1±0.1 | −0.1±0.1 | 0.1±0.2 | −0.1±0.1 | 1.4±2.4 | 34.5±13.9 | −0.1±0.0 | −0.2±0.1 | 92.1±11.1 | 100.9 |
| RELOCATE | HUMAN+EXPERT | 0.0±0.1 | −0.1±0.1 | −0.2±0.1 | −0.2±0.2 | 72.3±12.6 | 99.1±6.9 | 0.0±0.1 | −0.1±0.0 | 102.6±5.3 | 100.9 |
| KITCHEN | PARTIAL+COMPLETE | 45.5±1.9 | 2.5±5.0 | 5.5±8.2 | 27.3±5.4 | 48.8±8.9 | 45.8±14.8 | 2.8±1.1 | 19.4±4.6 | 53.1±13.1 | 75.0 |
| KITCHEN | MIXED+COMPLETE | 42.1±1.1 | 2.2±3.8 | 3.1±5.8 | 13.3±3.1 | 50.6±3.8 | 20.3±14.1 | 1.5±1.9 | 6.7±4.4 | 48.9±16.4 | 75.0 |
| Average | | 26.4 | 2.9 | 21.2 | 32.4 | 56.6 | 78.8 | 19.5 | 9.2 | 94.1 | |

Table 1: Comparison with other baselines on MuJoCo, Adroit, and FrankaKitchen. Results are normalized scores (mean ± standard deviation). SMODICE, ILID, ReCOIL, and ContraDICE-G learn from BG and BMIX only; SafeDICE, DWBC-GB, and ContraDICE additionally learn with BB.

To answer question (Q1), we present a comprehensive comparison between our method and existing baselines across 18 different datasets, as shown in Table 1. First, both BC-MIX and BC-G fail to achieve satisfactory performance across tasks. When learning from the good dataset BG and the unlabeled dataset BMIX, methods such as SMODICE and ILID perform reasonably well on the four MuJoCo locomotion tasks (CHEETAH, ANT, HOPPER, WALKER) but fail completely on the five hand manipulation tasks. In contrast, ReCOIL and our method variant ContraDICE-G successfully learn in both locomotion and manipulation tasks, demonstrating more robust generalization. In the setting that incorporates additional low-quality data BB, SafeDICE shows performance similar to SMODICE and ILID, again failing on the manipulation tasks. Furthermore, DWBC-GB fails to learn entirely, highlighting that a naive adaptation for leveraging poor-quality data can harm the learning process. These results suggest that incorporating bad data BB introduces new challenges, and that effectively utilizing such data requires a carefully designed algorithm grounded in strong theoretical principles. Overall, our method successfully leverages the bad dataset BB and consistently outperforms all other baselines across both locomotion and manipulation tasks.
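The normalized score used throughout Table 1 follows the standard D4RL convention. A minimal sketch (the ×100 scaling is our reading of that convention, since the formula in the text is stated without it):

```python
def normalized_score(score, random_score, expert_score):
    """Map a raw return onto the D4RL scale: 0 = random policy, 100 = expert."""
    return 100.0 * (score - random_score) / (expert_score - random_score)

# The random policy maps to 0 and the expert policy to 100; scores above 100
# indicate returns exceeding the expert reference.
assert normalized_score(5.0, 5.0, 50.0) == 0.0
assert normalized_score(50.0, 5.0, 50.0) == 100.0
```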
6.3 Effect of Number of Bad Demonstrations

[Figure 1: normalized score vs. number of bad trajectories (0, 1, 10, 25, 50, 100) for CHEETAH (RANDOM+EXPERT), HOPPER (RANDOM+EXPERT), HAMMER (CLONED+EXPERT), RELOCATE (CLONED+EXPERT), and KITCHEN (PARTIAL+COMPLETE), comparing expert, SafeDICE, DWBC-GB, and ContraDICE.]

Figure 1: Effect
of the size of the bad dataset BB on learning performance. Results are averaged over five training seeds and reported as normalized scores. As the number of bad trajectories increases, our method demonstrates a strong ability to leverage this data; in contrast, baseline methods such as SafeDICE and DWBC-GB struggle to make effective use of bad demonstrations.

To answer question (Q2), we investigate the impact of the size of the undesirable (bad) dataset on methods designed to learn from bad data. Specifically, we gradually increase the size of the bad dataset BB and evaluate how the performance of each algorithm is affected. The experimental results are presented in Figure 1. Overall, SafeDICE fails to effectively utilize the bad demonstrations, while DWBC-GB is only able to learn in the HOPPER task. In contrast, our method demonstrates strong scalability with respect to the size of the bad dataset, maintaining good performance even when provided with as few as a single bad trajectory.

6.4 Sensitivity Analysis of α

[Figure 2: normalized score as α varies from 0.0 to 0.9 for HOPPER (RANDOM+EXPERT), HAMMER (CLONED+EXPERT), and KITCHEN (PARTIAL+COMPLETE).]

Figure 2: Sensitivity analysis on the trade-off parameter α.

From our objective function (1), we introduce a hyperparameter 0 ≤ α < 1, which controls the weighting of the bad-data objective; this relates to question (Q3). To evaluate the sensitivity of our method to α, we conduct experiments varying its value and observing the effect on final performance, as shown in Figure 2. While α does have a noticeable impact, our method remains robust across a broad range of values, with optimal performance observed within this range. The specific α values used for each task are provided in the Appendix.
7 Conclusion

We introduced a new offline imitation learning framework that leverages both expert and explicitly undesirable demonstrations. By formulating the learning objective as the difference of KL divergences over visitation distributions, we capture informative contrasts between good and bad behaviors. While the resulting DC (difference-of-convex) program is generally non-convex, we establish conditions under which it becomes convex, specifically when the expert data dominates, leading to a practical, stable, and non-adversarial training procedure. Our unified approach to handling both expert and undesirable demonstrations yields superior performance across a range of offline imitation learning benchmarks, setting a new standard for learning from contrasting behaviors.

Limitations and Future Work. While our method shows strong empirical performance, it is currently limited to settings where α ≤ 1. Relaxing this constraint would make the learning objective more challenging to optimize, but represents a promising direction for future research. Additionally, we assume access to well-labeled expert and undesirable demonstrations, which may not hold in practice. Developing robust methods that can learn effectively from noisy or weakly labeled data would be a valuable extension of this work.

References

[1] Abbas Abdolmaleki, Bilal Piot, Bobak Shahriari, Jost Tobias Springenberg, Tim Hertweck, Michael Bloesch, Rishabh Joshi, Thomas Lampe, Junhyuk Oh, Nicolas Heess, Jonas Buchli, and Martin Riedmiller.
Learning from negative feedback, or positive feedback or both. In The Thirteenth International Conference on Learning Representations, 2025.
[2] Firas Al-Hafez, Davide Tateo, Oleg Arenz, Guoping Zhao, and Jan Peters. LS-IQ: Implicit reward regularization for inverse reinforcement learning. In Eleventh International Conference on Learning Representations (ICLR), 2023.
[3] Oleg Arenz and Gerhard Neumann. Non-adversarial imitation learning and its connections to adversarial methods. arXiv preprint arXiv:2008.03525, 2020.
[4] Daniel Brown, Wonjoon Goo, Prabhat Nagarajan, and Scott Niekum. Extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. In International Conference on Machine Learning, pages 783–792. PMLR, 2019.
[5] Cheng Chi, Zhenjia Xu, Siyuan Feng, Eric Cousineau, Yilun Du, Benjamin Burchfiel, Russ Tedrake, and Shuran Song. Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, page 02783649241273668, 2023.
[6] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning, 2020.
[7] Justin Fu, Katie Luo, and Sergey Levine. Learning robust rewards with adversarial inverse reinforcement learning. In International Conference on Learning Representations, 2018.
[8] Divyansh Garg, Shuvam Chakraborty, Chris Cundy, Jiaming Song, and Stefano Ermon. IQ-Learn: Inverse soft-Q learning for imitation. Advances in Neural Information Processing Systems, 34:4028–4039, 2021.
[9] Divyansh Garg, Joey Hejna, Matthieu Geist, and Stefano Ermon. Extreme Q-learning: MaxEnt RL without entropy. In International Conference on Learning Representations (ICLR), 2023.
[10] Ze Gong, Akshat Kumar, and Pradeep Varakantham. Offline safe reinforcement learning using trajectory classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 16880–16887, 2025.
[11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.
[12] Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, et al. Soft actor-critic algorithms and applications. arXiv preprint arXiv:1812.05905, 2018.
[13] Joey Hejna and Dorsa Sadigh. Inverse preference learning: Preference-based RL without a reward function. Advances in Neural Information Processing Systems, 36, 2024.
[14] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. Advances in Neural Information Processing Systems, 29, 2016.
[15] Huy Hoang, Tien Mai, and Pradeep Varakantham. Imitate the good and avoid the bad: An incremental approach to safe reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 12439–12447, 2024.
[16] Huy Hoang, Tien Anh Mai, and Pradeep Varakantham. SPRINQL: Sub-optimal demonstrations driven offline imitation learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
[17] Youngsoo Jang, Geon-Hyeong Kim, Jongmin Lee, Sungryull Sohn, Byoungjip Kim, Honglak Lee, and Moontae Lee. SafeDICE: Offline safe imitation learning with non-preferred demonstrations. Advances in Neural Information Processing Systems, 36, 2024.
[18] Yachen Kang, Diyuan Shi, Jinxin Liu, Li He, and Donglin Wang. Beyond reward: Offline preference-guided policy optimization. In International Conference on Machine Learning, pages 15753–15768. PMLR, 2023.
[19] Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, and Kimin Lee. Preference transformer: Modeling human preferences using transformers for
RL. In The Eleventh International Conference on Learning Representations, 2023.
[20] Geon-Hyeong Kim, Jongmin Lee, Youngsoo Jang, Hongseok Yang, and Kee-Eung Kim. LobsDICE: Offline learning from observation via stationary distribution correction estimation. Advances in Neural Information Processing Systems, 35:8252–8264, 2022.
[21] Geon-Hyeong Kim, Seokin Seo, Jongmin Lee, Wonseok Jeon, HyeongJoo Hwang, Hongseok Yang, and Kee-Eung Kim. DemoDICE: Offline imitation learning with supplementary imperfect demonstrations. In International Conference on Learning Representations, 2021.
[22] Ilya Kostrikov, Ofir Nachum, and Jonathan Tompson. Imitation learning via off-policy distribution matching. In International Conference on Learning Representations, 2020.
[23] Ilya Kostrikov, Ashvin Nair, and Sergey Levine. Offline reinforcement learning with implicit Q-learning. arXiv preprint arXiv:2110.06169, 2021.
[24] Jongmin Lee, Wonseok Jeon, Byungjun Lee, Joelle Pineau, and Kee-Eung Kim. OptiDICE: Offline policy optimization via stationary distribution correction estimation. In International Conference on Machine Learning, pages 6120–6130. PMLR, 2021.
[25] Ziniu Li, Tian Xu, Zeyu Qin, Yang Yu, and Zhi-Quan Luo. Imitation learning from imperfection: Theoretical justifications and algorithms. In Advances in Neural Information Processing Systems 37, 2023.
[26] Yuxiao Lu, Arunesh Sinha, and Pradeep Varakantham. Semantic loss guided data efficient supervised fine tuning for safe responses in LLMs. In The Thirteenth International Conference on Learning Representations, 2025.
[27] Yecheng Ma, Andrew Shen, Dinesh Jayaraman, and Osbert Bastani. Versatile offline imitation from observations and examples via regularized state-occupancy matching. In International Conference on Machine Learning, pages 14639–14663. PMLR, 2022.
[28] Liyuan Mao, Haoran Xu, Weinan Zhang, and Xianyuan Zhan.
ODICE: Revealing the mystery of distribution correction estimation via orthogonal-gradient update. In The Twelfth International Conference on Learning Representations, 2024.
[29] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[30] Vivek Myers, Erdem Biyik, Nima Anari, and Dorsa Sadigh. Learning multimodal rewards from rankings. In Conference on Robot Learning, pages 342–352. PMLR, 2022.
[31] Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. AlgaeDICE: Policy gradient from arbitrary experience. arXiv preprint arXiv:1912.02074, 2019.
[32] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
[33] Siddharth Reddy, Anca D. Dragan, and Sergey Levine. SQIL: Imitation learning via reinforcement learning with sparse rewards. arXiv preprint arXiv:1905.11108, 2019.
[34] Stéphane Ross, Geoffrey Gordon, and Drew Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 627–635. JMLR Workshop and Conference Proceedings, 2011.
[35] Harshit Sikchi, Qinqing Zheng, Amy Zhang, and Scott Niekum. Dual RL: Unification and new methods for reinforcement and imitation learning. In Proceedings of the 12th International Conference on Learning Representations (ICLR), 2024.
[36] Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z. Leibo, Karl Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning. arXiv preprint
arXiv:1706.05296, 2017.
[37] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018.
[38] Faraz Torabi, Garrett Warnell, and Peter Stone. Behavioral cloning from observation. arXiv preprint arXiv:1805.01954, 2018.
[39] Yueh-Hua Wu, Nontawat Charoenphakdee, Han Bao, Voot Tangkaratt, and Masashi Sugiyama. Imitation learning from imperfect demonstration. In International Conference on Machine Learning, pages 6818–6827. PMLR, 2019.
[40] Haoran Xu, Xianyuan Zhan, Honglei Yin, and Huiling Qin. Discriminator-weighted offline imitation learning from suboptimal demonstrations. In Proceedings of the 39th International Conference on Machine Learning, pages 24725–24742, 2022.
[41] Sheng Yue, Jiani Liu, Xingyuan Hua, Ju Ren, Sen Lin, Junshan Zhang, and Yaoxue Zhang. How to leverage diverse demonstrations in offline imitation learning. In Forty-first International Conference on Machine Learning, 2024.
[42] Songyuan Zhang, Zhangjie Cao, Dorsa Sadigh, and Yanan Sui. Confidence-aware imitation learning from demonstrations with varying optimality. Advances in Neural Information Processing Systems, 34:12340–12350, 2021.
[43] Tony Zhao, Vikash Kumar, Sergey Levine, and Chelsea Finn. Learning fine-grained bimanual manipulation with low-cost hardware. Robotics: Science and Systems XIX, 2023.
[44] Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, and Sergey Levine. The ingredients of real world robotic reinforcement learning. In International Conference on Learning Representations, 2020.

Appendix

A Missing Proofs

Proposition 4.1: If α ≤ 1, then the objective function f(dπ) = D_KL(dπ ∥ dG) − α D_KL(dπ ∥ dB) is convex in dπ.

Proof.
We write the objective function as:

f(dπ) = Σ_{s,a} dπ(s, a) log( dπ(s, a)/dG(s, a) ) − α Σ_{s,a} dπ(s, a) log( dπ(s, a)/dB(s, a) )
      = Σ_{s,a} [ (1 − α) dπ(s, a) log dπ(s, a) + dπ(s, a) ( α log dB(s, a) − log dG(s, a) ) ].   (9)

The first term is convex in dπ, since α ≤ 1 and dπ(s, a) log dπ(s, a) is convex in dπ. Moreover, the second term is linear in dπ. This implies that f(dπ) is convex in dπ if α ≤ 1, as desired.

Proposition 4.2: The objective function in (2) can be written as f(d, π) = (1 − α) D_KL(d ∥ dU) − E_{(s,a)∼d}[Ψ(s, a)], where Ψ(s, a) = log(dG(s, a)/dU(s, a)) − α log(dB(s, a)/dU(s, a)).

Proof. We can expand the objective function, using dU as an intermediate distribution:

f(d, π) = E_{(s,a)∼d}[ log( d(s, a)/dG(s, a) ) ] − α E_{(s,a)∼d}[ log( d(s, a)/dB(s, a) ) ]
        = E_{(s,a)∼d}[ log( d(s, a)/dU(s, a) ) + log( dU(s, a)/dG(s, a) ) ] − α E_{(s,a)∼d}[ log( d(s, a)/dU(s, a) ) + log( dU(s, a)/dB(s, a) ) ]
        = (1 − α) E_{(s,a)∼d}[ log( d(s, a)/dU(s, a) ) ] − E_{(s,a)∼d}[ Ψ(s, a) ]
        = (1 − α) D_KL(d ∥ dU) − E_{(s,a)∼d}[ Ψ(s, a) ],

where Ψ(s, a) = log(dG(s, a)/dU(s, a)) − α log(dB(s, a)/dU(s, a)).

Proposition 4.3: Let the surrogate objective be defined as:

eL(Q, π) = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] − E_{dU}[ δ(s, a) Tπ[Q](s, a) ] + (1 − α) E_{dU}[ δ(s, a) ],   (10)

where δ(s, a) = exp( Ψ(s, a)/(1 − α) ). Then eL(Q, π) is a lower bound of L(Q, π), with equality when Tπ[Q](s, a) = 0 for all (s, a).

Proof. We first write L(Q, π) as:

L(Q, π) = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] + (1 − α) E_{(s,a)∼dU}[ exp( (Ψ(s, a) − Tπ[Q](s, a))/(1 − α) ) ]
        = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] + (1 − α) E_{(s,a)∼dU}[ exp( Ψ(s, a)/(1 − α) ) exp( −Tπ[Q](s, a)/(1 − α) ) ]
        = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] + (1 − α) E_{(s,a)∼dU}[ δ(s, a) exp( −Tπ[Q](s, a)/(1 − α) ) ],

where we define δ(s, a) := exp( Ψ(s, a)/(1 − α) ). Now, we use the
inequality e^t ≥ t + 1 (which follows from the convexity of e^t and is tight at t = 0) to obtain:

exp( −Tπ[Q](s, a)/(1 − α) ) ≥ −Tπ[Q](s, a)/(1 − α) + 1.

Substituting this into the expression for L(Q, π), we get:

L(Q, π) ≥ (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] + (1 − α) E_{(s,a)∼dU}[ δ(s, a) ( −Tπ[Q](s, a)/(1 − α) + 1 ) ] =: eL(Q, π).

Equality holds in e^t ≥ t + 1 when t = 0, which corresponds to Tπ[Q](s, a) = 0. That is, the equality L(Q, π) = eL(Q, π) holds when the rewards represented by the Q-function are zero everywhere. This completes the proof.

Proposition 4.4: The following properties hold:
(i) eL(Q, π) is linear in Q and concave in π. As a result, the max–min optimization can be equivalently reformulated as a min–max problem: max_π min_Q eL(Q, π) = min_Q max_π eL(Q, π).
(ii) The min–max problem min_Q max_π eL(Q, π) reduces to the following non-adversarial problem:

min_Q eL(Q) = (1 − γ) E_{s∼p0}[ V_Q(s) ] − E_{(s,a)∼dU}[ exp( Ψ(s, a)/(1 − α) ) T[Q](s, a) ],

where the soft value function is V_Q(s) = β log( Σ_a μU(a|s) exp( Q(s, a)/β ) ), and the soft Bellman residual operator is T[Q](s, a) = Q(s, a) − γ E_{s′}[V_Q(s′)]. Moreover, eL(Q) is convex in Q.

Proof. We first write eL(Q, π) as:

eL(Q, π) = (1 − γ) E_{s∼p0}[ Vπ_Q(s) ] − E_{(s,a)∼dU}[ δ(s, a) ( Q(s, a) − γ E_{s′}[ Vπ_Q(s′) ] ) ] + (1 − α) E_{(s,a)∼dU}[ δ(s, a) ],

where we recall that Vπ_Q(s) = E_{a∼π(·|s)}[ Q(s, a) − β log( π(a|s)/μU(a|s) ) ]. Thus, eL(Q, π) is linear in Q. Moreover, the function Vπ_Q(s) is concave in π, since it is composed of an expectation of a linear function of π (through Q(s, a)) and a negative KL-divergence term, which is convex in π and thus whose negative is concave. Furthermore, since δ(s, a) > 0, the coefficients associated with Vπ_Q(s) in eL(Q, π) are non-negative. This implies that the entire function eL(Q, π) is concave in π. Now, since eL(Q, π) is concave in π and linear in Q, we can apply the minimax theorem to swap the order of the max and min:

max_π min_Q eL(Q, π) = min_Q max_π eL(Q, π).
This holds because the function $\widetilde{L}(Q,\pi)$ satisfies the standard conditions of the minimax theorem: it is concave in $\pi$, convex (in fact, linear) in $Q$, and the optimization domains are convex.

Next, observe that in $\widetilde{L}(Q,\pi)$, the variable $\pi$ only appears through the term $V^\pi_Q(s)$, and all coefficients multiplying $V^\pi_Q(s)$ are non-negative. Therefore, maximizing $\widetilde{L}(Q,\pi)$ over $\pi$ is equivalent to maximizing $V^\pi_Q(s)$ for each state $s$ independently. That is,
$$\max_\pi \widetilde{L}(Q,\pi) \equiv \max_\pi \sum_s c(s)\, V^\pi_Q(s),$$
for some non-negative coefficients $c(s) \ge 0$, which implies it suffices to solve $\max_\pi V^\pi_Q(s)$ pointwise. Recall the definition:
$$V^\pi_Q(s) = \mathbb{E}_{a\sim\pi(\cdot|s)}\!\left[Q(s,a) - \beta\log\frac{\pi(a|s)}{\mu_U(a|s)}\right].$$
The inner maximization over $\pi(\cdot\,|s)$ is a standard entropy-regularized problem, and the optimal policy has the closed-form solution:
$$\pi^*(a|s) = \frac{\mu_U(a|s)\exp\!\left(Q(s,a)/\beta\right)}{\sum_{a'}\mu_U(a'|s)\exp\!\left(Q(s,a')/\beta\right)}.$$
This is a weighted softmax over the $Q(s,a)$ values, using the baseline distribution $\mu_U(a|s)$ as the reference. Substituting this back into $V^\pi_Q(s)$ yields the closed-form maximized value:
$$\max_\pi V^\pi_Q(s) = \beta\log\left(\sum_a \mu_U(a|s)\exp\frac{Q(s,a)}{\beta}\right).$$
Thus:
$$\min_Q \max_\pi \widetilde{L}(Q,\pi) = \min_Q \widetilde{L}(Q), \quad\text{where}\quad \widetilde{L}(Q) = (1-\gamma)\,\mathbb{E}_{s\sim p_0}[V_Q(s)] - \mathbb{E}_{(s,a)\sim d^U}\!\left[\exp\!\left(\frac{\Psi(s,a)}{1-\alpha}\right)\left(Q(s,a) - \gamma\,\mathbb{E}_{s'}[V_Q(s')]\right)\right],$$
and
$$V_Q(s) = \beta\log\sum_a \mu_U(a|s)\exp\frac{Q(s,a)}{\beta}.$$
We can now see that $\widetilde{L}(Q)$ is convex in $Q$, due to the following reasons:
• The function $Q(s,\cdot) \mapsto \log\sum_a \mu_U(a|s)\exp(Q(s,a)/\beta)$ is a log-sum-exp (softmax), which is convex.
• $V_Q(s)$, being a composition of a convex function with an affine transformation, is convex in $Q$.
• Expectations of convex functions (e.g., $\mathbb{E}_{s\sim p_0}[V_Q(s)]$, $\mathbb{E}_{s'}[V_Q(s')]$) preserve convexity.
• The remaining terms in $\widetilde{L}(Q)$, such as $Q(s,a)$, appear linearly and thus preserve convexity.
Hence, the overall objective $\widetilde{L}(Q)$ is convex in $Q$, which completes the proof.

Proposition 5.1: The following Q-weighted behavior cloning (BC) objective yields the same optimal policy as the original advantage-weighted BC formulation in (7):
$$\max_\pi \sum_{(s,a)\sim B_U} \exp\!\left(\frac{1}{\beta}Q(s,a)\right)\log\pi(a|s). \tag{11}$$

Proof. The Q-weighted BC objective can be written as:
$$\max_\pi \sum_{(s,a)} \mu_U(s,a)\exp\!\left(\frac{1}{\beta}Q(s,a)\right)\log\pi(a|s).$$
This represents a weighted maximum-likelihood objective, where the weights are shaped by the exponential of the Q-values. For each state $s$, the optimal solution $\pi^*(a|s)$ is given by:
$$\pi^*(a|s) = \frac{\mu_U(s,a)\exp\!\left(\frac{1}{\beta}Q(s,a)\right)}{\sum_{a'}\mu_U(s,a')\exp\!\left(\frac{1}{\beta}Q(s,a')\right)}.$$
Moreover, we recall that:
$$V_Q(s) = \beta\log\left(\sum_{a'}\mu_U(s,a')\exp\frac{1}{\beta}Q(s,a')\right),$$
which allows us to express the optimal policy in terms of the advantage $Q(s,a) - V_Q(s)$ as:
$$\pi^*(a|s) = \mu_U(s,a)\exp\!\left(\frac{1}{\beta}\bigl(Q(s,a) - V_Q(s)\bigr)\right).$$
This is precisely the optimal policy corresponding to the advantage-weighted BC objective defined in Equation (7). This completes the proof.

B A Note on ContraDICE under f-Divergence
We note that the convexity stated in Proposition 4.1 does not hold under arbitrary f-divergences, even under the same assumptions. To illustrate this, consider the following objective defined using an f-divergence:
$$F(d^\pi) = D_f(d^\pi \,\|\, d^G) - \alpha\, D_f(d^\pi \,\|\, d^B),$$
which can be written as:
$$F(d^\pi) = \sum_{(s,a)} \left[ d^G(s,a)\, f\!\left(\frac{d^\pi(s,a)}{d^G(s,a)}\right) - \alpha\, d^B(s,a)\, f\!\left(\frac{d^\pi(s,a)}{d^B(s,a)}\right) \right].$$
Observe that each term
$$d^G(s,a)\, f\!\left(\frac{d^\pi(s,a)}{d^G(s,a)}\right) - \alpha\, d^B(s,a)\, f\!\left(\frac{d^\pi(s,a)}{d^B(s,a)}\right)$$
is not necessarily convex for any $\alpha > 0$.
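To make the failure of convexity concrete, a quick numeric check can contrast the KL generator $f(t) = t\log t$, for which the per-$(s,a)$ term is convex whenever $\alpha < 1$, with the $\chi^2$ generator $f(t) = (t-1)^2$, for which the term can be concave. This is purely illustrative; the values of $d^G$, $d^B$, and $\alpha$ below are hypothetical, not from the paper:

```python
import numpy as np

# Hypothetical occupancy values for one (s, a) pair.
d_G, d_B, alpha = 0.5, 0.01, 0.5

def term(x, f):
    # The per-(s, a) term of F(d^pi) as a function of x = d^pi(s, a).
    return d_G * f(x / d_G) - alpha * d_B * f(x / d_B)

f_kl = lambda t: t * np.log(t)        # KL generator
f_chi2 = lambda t: (t - 1.0) ** 2     # chi^2 generator

# Convexity check via second finite differences on a grid:
# a function is convex iff all second differences are non-negative.
x = np.linspace(0.05, 1.0, 200)
def second_diff(f):
    y = term(x, f)
    return y[:-2] - 2 * y[1:-1] + y[2:]

kl_convex = bool(np.all(second_diff(f_kl) >= 0))      # KL term stays convex (alpha < 1)
chi2_convex = bool(np.all(second_diff(f_chi2) >= 0))  # chi^2 term fails convexity here
```

With these values the $\chi^2$ term has constant second derivative $2/d^G - 2\alpha/d^B < 0$, so the check flags it as non-convex while the KL term passes.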
Whether this expression is convex depends on the values of $d^G(s,a)$ and $d^B(s,a)$. In particular, if $d^G(s,a) = 0$, i.e., the state-action pair $(s,a)$ is never visited by the expert policy, then the term may become concave. Therefore, in general, the objective $F(d^\pi)$ defined under an f-divergence is not convex in $d^\pi$ for arbitrary choices of $\alpha$, and thus the standard Lagrangian duality cannot be applied. For this reason, the KL divergence appears to be an ideal choice for our problem of learning from both expert and undesirable demonstrations.

C Experiment Settings

C.1 Full Pseudo Code
The detailed implementation is provided in Algorithm 2.

Algorithm 2 ContraDICE: Offline Imitation Learning from Contrasting Behaviors (full)
Require: good dataset $B_G$, bad dataset $B_B$, unlabeled dataset $B_U$
Require: hyperparameters $\alpha \in [0,1)$, $\beta$, $\gamma$, $N_\mu$, $N$, target update rate $\tau$, batch size $B$
1: Initialize networks: $Q_{w_q}(s,a)$, $V_{w_v}(s)$, $\pi_\theta(a|s)$, classifiers $c^G_{w_G}(s,a)$, $c^B_{w_B}(s,a)$
2: Initialize target Q-network: $Q_{\text{target}} \leftarrow Q_{w_q}$
3:
4: Step 1: Estimate occupancy ratios
5: for $i = 1$ to $N_\mu$ do
6:   Sample batches $\{(s^G_i)'\}_{i=1}^B \sim B_G$; $\{(s^B_i)'\}_{i=1}^B \sim B_B$; $\{(s^U_i)'\}_{i=1}^B \sim B_U$
7:   Update $c^G_{w_G}$ by maximizing the objective in Equation (5).
8:   Update $c^B_{w_B}$ by maximizing an analogous objective to Equation (5) for the bad dataset.
9: end for
10:
11: Step 2: Calculate the Ψ function
12: Calculate $\Psi(s,a) = \log\frac{c^G_{w_G}(s')}{1 - c^G_{w_G}(s')} - \alpha\log\frac{c^B_{w_B}(s')}{1 - c^B_{w_B}(s')}$.
13:
14: Step 3: Train Q, V, and policy
15: for $i = 1$ to $N$ do
16:   Sample batch $\{(s_i, a_i, s'_i, \Psi_i)\}_{i=1}^B \sim B_U$
17:   Q-update: minimize the objective $\widetilde{L}(Q_{w_q}|V_{w_v}) + \frac{1}{2}(Q_{w_q}(s_i, a_i) - \gamma V_{w_v}(s'_i))^2$
18:     (reference: $\widetilde{L}(Q|V)$ from Sec. 5 / Eq. (6))
19:   V-update: minimize the Extreme-V objective:
$$\min_{w_v} \frac{1}{B}\sum_{i=1}^B \left[\exp\!\left(\frac{Q_{\text{target}}(s_i,a_i) - V_{w_v}(s_i)}{\beta}\right) - \frac{Q_{\text{target}}(s_i,a_i) - V_{w_v}(s_i)}{\beta} - 1\right]$$
20:   Policy update: maximize the Q-weighted behavior cloning objective
21:     (reference: Sec. 5 / Eq. (8))
22:   Target Q-update: soft update $Q_{\text{target}} \leftarrow \tau Q_{w_q} + (1-\tau) Q_{\text{target}}$
23: end for
24:
25: return trained policy $\pi_\theta$

C.2 Dataset Construction
From the official D4RL dataset we use three different domains:
• MuJoCo Locomotion [CHEETAH, ANT, HOPPER, WALKER] with three types of dataset:
  – EXPERT
  – MEDIUM
  – RANDOM
• Adroit [PEN, HAMMER, DOOR, RELOCATE] with three types of dataset:
  – EXPERT
  – HUMAN
  – CLONED
• FrankaKitchen [KITCHEN] with three types of dataset:
  – COMPLETE
  – MIXED
  – PARTIAL

Following the approach of [35], we also provide several combinations across all three domains, as shown in Table 2. Notably, the unlabeled dataset B_MIX is constructed by combining the entire suboptimal dataset with the expert dataset, resulting in an overlap between B_B and B_MIX. Nevertheless, this setup is practical: given a good dataset B_G and an unlabeled dataset B_MIX, users can randomly sample trajectories and assign them to either B_G or B_B without the need for any additional external data.
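The sampling procedure described above can be sketched in a few lines (illustrative only; the trajectory objects and the split sizes are hypothetical stand-ins, not the paper's actual data pipeline):

```python
import random

def build_datasets(expert_trajs, suboptimal_trajs, n_good=1, n_bad=10, n_expert_in_mix=30, seed=0):
    """Sketch of the Sec. C.2 construction: B_MIX combines the full
    suboptimal set with some expert trajectories, while B_G and B_B are
    small labeled subsets, so B_B may overlap with B_MIX."""
    rng = random.Random(seed)
    B_G = rng.sample(expert_trajs, n_good)        # a few good trajectories
    B_B = rng.sample(suboptimal_trajs, n_bad)     # a few bad trajectories
    B_MIX = suboptimal_trajs + rng.sample(expert_trajs, n_expert_in_mix)
    return B_G, B_B, B_MIX

# Hypothetical toy trajectories (integers stand in for trajectory objects).
expert = list(range(100))
subopt = list(range(100, 1100))
B_G, B_B, B_MIX = build_datasets(expert, subopt)
```

As the construction shows, every trajectory in B_B also appears in B_MIX, which is exactly the overlap noted above.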
Task     | Unlabeled name    | B_G        | B_B        | B_MIX
CHEETAH  | RANDOM+EXPERT     | 1 EXPERT   | 10 RANDOM  | Full RANDOM + 30 EXPERT
CHEETAH  | MEDIUM+EXPERT     | 1 EXPERT   | 10 MEDIUM  | Full MEDIUM + 30 EXPERT
ANT      | RANDOM+EXPERT     | 1 EXPERT   | 10 RANDOM  | Full RANDOM + 30 EXPERT
ANT      | MEDIUM+EXPERT     | 1 EXPERT   | 10 MEDIUM  | Full MEDIUM + 30 EXPERT
HOPPER   | RANDOM+EXPERT     | 1 EXPERT   | 10 RANDOM  | Full RANDOM + 30 EXPERT
HOPPER   | MEDIUM+EXPERT     | 1 EXPERT   | 10 MEDIUM  | Full MEDIUM + 30 EXPERT
WALKER   | RANDOM+EXPERT     | 1 EXPERT   | 10 RANDOM  | Full RANDOM + 30 EXPERT
WALKER   | MEDIUM+EXPERT     | 1 EXPERT   | 10 MEDIUM  | Full MEDIUM + 30 EXPERT
PEN      | CLONED+EXPERT     | 1 EXPERT   | 25 CLONED  | Full CLONED + 100 EXPERT
PEN      | HUMAN+EXPERT      | 1 EXPERT   | 25 HUMAN   | Full HUMAN + 100 EXPERT
HAMMER   | CLONED+EXPERT     | 1 EXPERT   | 25 CLONED  | Full CLONED + 100 EXPERT
HAMMER   | HUMAN+EXPERT      | 1 EXPERT   | 25 HUMAN   | Full HUMAN + 100 EXPERT
DOOR     | CLONED+EXPERT     | 1 EXPERT   | 25 CLONED  | Full CLONED + 100 EXPERT
DOOR     | HUMAN+EXPERT      | 1 EXPERT   | 25 HUMAN   | Full HUMAN + 100 EXPERT
RELOCATE | CLONED+EXPERT     | 1 EXPERT   | 25 CLONED  | Full CLONED + 100 EXPERT
RELOCATE | HUMAN+EXPERT      | 1 EXPERT   | 25 HUMAN   | Full HUMAN + 100 EXPERT
KITCHEN  | PARTIAL+COMPLETE  | 1 COMPLETE | 25 PARTIAL | Full PARTIAL + 1 COMPLETE
KITCHEN  | MIXED+COMPLETE    | 1 COMPLETE | 25 MIXED   | Full MIXED + 1 COMPLETE

Table 2: Dataset Construction.

The numbers in Table 2 indicate the number of trajectories drawn from each corresponding dataset. For the KITCHEN task, we follow the setting of [35], where only a single trajectory from the COMPLETE dataset is included in B_MIX.

C.3 Baselines Implementation
We compare our method against several established baselines. For methods with publicly available code, we utilized their official implementations without algorithmic modifications.

C.3.1 Behavior Cloning (BC)
We employ the standard Behavior Cloning (BC) objective, which minimizes the negative log-likelihood of the demonstrated actions under the learned policy:
$$\min_\pi\; -\mathbb{E}_{(s,a)\sim B}\log\pi(a|s), \tag{12}$$
where $B$ denotes the dataset of state-action pairs. Specifically, $B$ corresponds to B_MIX in the case of BC-MIX, or B_G for BC-G.
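The BC objective in Equation (12) is just a negative log-likelihood; for a tabular policy it can be sketched as follows (illustrative toy values, not the paper's network-based implementation):

```python
import numpy as np

def bc_loss(log_pi, actions):
    """Negative log-likelihood of demonstrated actions.

    log_pi:  (N, A) array of log pi(a | s_i) for each sampled state.
    actions: (N,) array of demonstrated action indices.
    """
    return -np.mean(log_pi[np.arange(len(actions)), actions])

# Toy example: 3 states, 2 actions, policy already prefers action 0.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]])
log_pi = np.log(probs)
demo_actions = np.array([0, 0, 0])
loss = bc_loss(log_pi, demo_actions)  # small, since the demos match the policy
```

Minimizing this loss over the policy parameters pushes probability mass onto the demonstrated actions.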
C.3.2 Other Baselines with Official Implementations For the following baselines, we used their official, unmodified implementations:
• SMODICE [27]: applied to both the good dataset (B_G) and the mixed dataset (B_MIX). The official code is available at [GitHub].
• ILID [41]: applied to B_G and B_MIX. The official code is available at [GitHub].
• ReCOIL [35]: applied to B_G and B_MIX. The official code is available at [GitHub].
• SafeDICE [17]: applied to the bad dataset (B_B) and the mixed dataset (B_MIX). The official code is available at [GitHub].

C.3.3 DWBC-GB
DWBC-GB is our adaptation of DWBC [40] (original official implementation: [GitHub]). While the original DWBC is designed for scenarios involving B_G and B_MIX, our modified version, DWBC-GB, is extended to handle all three dataset types: B_G, B_B, and B_MIX. This adaptation involves training two discriminators: $c^G$ for good data and $c^B$ for bad data. Their respective loss functions are:
$$L_{c^G} = \eta\,\mathbb{E}_{(s,a)\sim B_G}[-\log c^G(s,a,\log\pi(a|s))] + \mathbb{E}_{(s,a)\sim B_{MIX}}[-\log(1 - c^G(s,a,\log\pi(a|s)))] - \eta\,\mathbb{E}_{(s,a)\sim B_G}[-\log(1 - c^G(s,a,\log\pi(a|s)))], \tag{13}$$
$$L_{c^B} = \eta\,\mathbb{E}_{(s,a)\sim B_B}[-\log c^B(s,a,\log\pi(a|s))] + \mathbb{E}_{(s,a)\sim B_{MIX}}[-\log(1 - c^B(s,a,\log\pi(a|s)))] - \eta\,\mathbb{E}_{(s,a)\sim B_B}[-\log(1 - c^B(s,a,\log\pi(a|s)))]. \tag{14}$$
The policy $\pi$ is then learned by minimizing the objective:
$$\min_\pi\; \mathbb{E}_{(s,a)\sim B_G}\!\left[-\log\pi(a|s)\cdot\left(\alpha - \frac{\eta}{c(s,a)\,(1-c(s,a))}\right)\right] + \mathbb{E}_{(s,a)\sim B_{MIX}}\!\left[-\log\pi(a|s)\cdot\frac{1}{1-c(s,a)}\right], \tag{15}$$
where $c(s,a) = c^G(s,a) - c^B(s,a)$. (Note: $\eta$ and $\alpha$ are hyperparameters.)

C.4 Hyperparameters
Our method features two primary hyperparameters: $\alpha$ (weighting for balancing positive and negative samples) and $\beta$ (Extreme-V update). Sections 6.4, D.6, and D.8 present ablation studies detailing the sensitivity to these parameters.
Specific parameters for all tasks are provided in Table 3 below:

Task     | Unlabeled name    | α   | β
CHEETAH  | RANDOM+EXPERT     | 0.6 | 20.0
CHEETAH  | MEDIUM+EXPERT     | 0.6 | 15.0
ANT      | RANDOM+EXPERT     | 0.6 | 15.0
ANT      | MEDIUM+EXPERT     | 0.6 | 15.0
HOPPER   | RANDOM+EXPERT     | 0.4 | 30.0
HOPPER   | MEDIUM+EXPERT     | 0.4 | 30.0
WALKER   | RANDOM+EXPERT     | 0.6 | 20.0
WALKER   | MEDIUM+EXPERT     | 0.6 | 20.0
PEN      | CLONED+EXPERT     | 0.4 | 15.0
PEN      | HUMAN+EXPERT      | 0.4 | 10.0
HAMMER   | CLONED+EXPERT     | 0.2 | 10.0
HAMMER   | HUMAN+EXPERT      | 0.6 | 20.0
DOOR     | CLONED+EXPERT     | 0.4 | 15.0
DOOR     | HUMAN+EXPERT      | 0.4 | 10.0
RELOCATE | CLONED+EXPERT     | 0.4 | 30.0
RELOCATE | HUMAN+EXPERT      | 0.8 | 3.0
KITCHEN  | PARTIAL+COMPLETE  | 0.3 | 10.0
KITCHEN  | MIXED+COMPLETE    | 0.3 | 30.0

Table 3: Hyperparameters.

Beyond these, all other hyperparameters are applied consistently across all benchmarks and settings. The policy, Q-function, V-function, and discriminator all use a 2-layer feedforward neural network with 256 hidden units and ReLU activation functions. The policy uses a Tanh-Gaussian output. The Adam optimizer is configured with a weight decay of 1 × 10^-3, all learning rates are set to 3 × 10^-4, the mini-batch size is 1024, and a soft critic update parameter τ = 0.005 is used. These hyperparameters are summarized in Table 4:

Hyperparameter                                       | Value
Network architecture (policy, Q-func, V-func, disc.) | 2-layer neural network
Hidden units per layer                               | 256
Batch size                                           | 1024
Activation function (hidden layers)                  | ReLU
Policy output activation                             | Tanh Gaussian
Optimizer                                            | Adam
Learning rate (all networks)                         | 3 × 10^-4
Weight decay (Adam)                                  | 1 × 10^-3
Soft critic update rate (τ)                          | 0.005

Table 4: Consistent hyperparameters used across all benchmarks and settings.

C.5 Computational Resource
Our experiments were conducted on a pool of 12 NVIDIA GPUs, including L40, A5000, and RTX 3090 models. For each experimental configuration, five training seeds were executed in parallel, sharing a single GPU, eight CPU cores, and 64 GB of RAM. Under these shared conditions, completing 1 million training steps across all five seeds took approximately 30 minutes.
The software environment was based on JAX version 0.4.28 (with CUDA 12 support), running on CUDA
version 12.3.2 and cuDNN version 8.9.7.29.

D Additional Experiments

D.1 Impact of the Size of the Bad Dataset: Full Details
To support the experiment in Section 6.3, we present the complete results for all MuJoCo Locomotion and Adroit manipulation tasks. In particular, we progressively increase the size of the suboptimal dataset B_B and evaluate the impact on each algorithm's performance. The results, shown in Figure 3, demonstrate that ContraDICE consistently outperforms all other baselines across all tasks, effectively leveraging the bad data to achieve superior performance. Notably, the results indicate that with only a single good trajectory in B_G, increasing the number of bad trajectories in B_B to just 10 is sufficient for ContraDICE to achieve its highest performance across all tasks.

[Figure 3: Per-task scores for expert, SafeDICE, DWBC-GB, and ContraDICE as the number of bad trajectories varies over {0, 1, 10, 25, 50, 100}, for CHEETAH, ANT, HOPPER, and WALKER (RANDOM+EXPERT and MEDIUM+EXPERT) and PEN, HAMMER, DOOR, and RELOCATE (CLONED+EXPERT and HUMAN+EXPERT).]
Figure 3: Full bad dataset size
effect. SafeDICE and DWBC-GB have no variant that learns from zero bad trajectories, so we assign them a score of 0.0 in that setting.

D.2 Impact of the Number of Expert Demonstrations in B_G
In this section, we investigate how many expert trajectories in the good dataset B_G are sufficient to achieve optimal performance. To this end, the number of expert trajectories in B_G was incrementally increased through the set {1, 3, 5, 10, 25}, while the composition of the unlabeled dataset (B_MIX) remained fixed, as specified in Table 1. The detailed results are presented in Figures 4 and 5. ILID performs well on the MuJoCo locomotion tasks (CHEETAH, ANT, HOPPER, WALKER), but struggles in 3 out of 4 Adroit tasks (HAMMER, DOOR, RELOCATE). This indicates that ILID requires a sufficient number of expert trajectories to achieve stable expert performance, a requirement not met in the more complex Adroit tasks. In contrast, ReCOIL appears unable to effectively leverage the good data, as its performance does not improve significantly with more expert trajectories. Overall, ContraDICE demonstrates
consistently strong performance, requiring only 3 to 5 expert trajectories to achieve near-optimal results in all tasks.

Discussion on the Use Cases of ILID and ContraDICE: Through this experiment, we observe that in the MuJoCo tasks, ILID can outperform ContraDICE-G when the size of the good dataset is sufficiently large. This highlights a limitation of ContraDICE, whose policy extraction objective is defined as $\max_\pi \{\sum_{(s,a)\sim B_U} \exp(\frac{1}{\beta}Q(s,a))\log\pi(a|s)\}$. This objective uses data from the union dataset B_U, which may assign high weights to poor-quality transitions, potentially harming training. In contrast, ILID only retains transitions that are connected to good data and explicitly discards irrelevant or undesirable transitions (refer to the implementation details of ILID for more information). This targeted filtering strategy enables ILID to avoid the negative effects of poor transitions and to scale more effectively with increasing amounts of good data.

These observations suggest a potential direction for improving ContraDICE by incorporating similar data filtering mechanisms. Specifically, enhancing ContraDICE to better isolate high-quality transitions could help it perform competitively with ILID in scenarios where the good dataset is large. We leave this exploration for future work, as it requires a careful study of how to construct an optimal dataset using Q-based methods. In summary, ILID is a strong approach that scales well with the quality and size of the expert dataset.
Practitioners may prefer discriminator-based methods like ILID when sufficient high-quality expert data is available, while ContraDICE remains a robust choice in settings where such data is limited, and it scales well with the bad dataset.

[Figure 4: Scores of expert, ReCOIL, ILID, and ContraDICE-G as the number of expert trajectories varies over {1, 3, 5, 10, 25} for the MuJoCo locomotion tasks (CHEETAH, ANT, HOPPER, WALKER; RANDOM+EXPERT and MEDIUM+EXPERT).]
Figure 4: Effect of good dataset size, without impact from the bad dataset, in MuJoCo Locomotion tasks.

[Figure 5: Scores of expert, ReCOIL, ILID, and ContraDICE-G as the number of expert trajectories varies over {1, 3, 5, 10, 25} for the Adroit manipulation tasks (PEN, HAMMER, DOOR, RELOCATE; CLONED+EXPERT and HUMAN+EXPERT).]
Figure 5: Effect of good dataset size, without impact from the bad dataset, in Adroit Manipulation tasks.

D.3 Discussion: How Many Bad Trajectories in B_B Are Sufficient to Replace a Good Trajectory in B_G for ContraDICE?
Based on the previous experiments:
• Section D.1 addresses the question: how does the size of the bad dataset B_B affect the performance of ContraDICE?
• Section D.2 investigates an additional question: how does the size of the good dataset B_G affect the performance of ContraDICE?
From these experiments, we derive the following observations:
• With only one good trajectory in B_G, adding 10 bad trajectories to B_B is sufficient for ContraDICE to achieve its best performance.
• Without any bad data in B_B, 3 to 5 good trajectories in B_G are enough to reach peak performance.
These results suggest that ContraDICE can efficiently utilize bad data to reduce the need for good data, with an estimated 2 to 5 bad trajectories being roughly equivalent to one good trajectory across the benchmarks studied in this paper.

D.4 Comparison of Advantage-Weighted BC and Q-Weighted BC for Policy Extraction
In this paper, we propose a novel policy extraction method called QW-BC (Objective (8)), in contrast to prior approaches that rely on AW-BC (Objective (7)). In this section, we present a comparison between QW-BC and AW-BC, as illustrated in Figure 6. Overall, QW-BC demonstrates superior policy extraction performance, attributed to the stability it derives from relying on a single network estimate.
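The two weighting schemes differ only in how the per-sample weight is formed: AW-BC weights each transition by $\exp((Q(s,a)-V(s))/\beta)$, which depends on two learned estimates, while QW-BC uses $\exp(Q(s,a)/\beta)$ alone. A minimal numeric sketch of why a second network can destabilize the weights (all values below are toy numbers, not learned estimates):

```python
import numpy as np

beta = 10.0
Q = np.array([55.0, 60.0, 5.0])   # toy Q-estimates for three transitions
V = np.array([50.0, 58.0, 4.0])   # toy V-estimates from a second network

aw_weights = np.exp((Q - V) / beta)  # AW-BC: depends on both Q and V
qw_weights = np.exp(Q / beta)        # QW-BC: depends on Q only

# If V is under-estimated on a bad transition (index 2), its AW-BC
# weight is inflated by exp(error / beta); QW-BC is unaffected.
V_noisy = V + np.array([0.0, 0.0, -20.0])
aw_noisy = np.exp((Q - V_noisy) / beta)
```

Here the 20-point V error multiplies the bad transition's AW-BC weight by $e^2 \approx 7.4$, illustrating how errors in a second value estimate can translate into overly high weights on bad transitions.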
In contrast, AW-BC often exhibits oscillations and instability, frequently assigning inconsistent and overly high weights to bad transitions.

[Figure 6: Training curves (normalized score vs. training steps, up to 1M) comparing expert, AW-BC, and QW-BC on CHEETAH, ANT, HOPPER, and WALKER (RANDOM+EXPERT and MEDIUM+EXPERT) and PEN, HAMMER, DOOR, and RELOCATE (CLONED+EXPERT and HUMAN+EXPERT).]
Figure 6: AW-BC and QW-BC comparison.

D.5 Performance Across Varying Quality Levels of the Unlabeled Dataset B_MIX
The performance of all methods is influenced by the quality of the unlabeled dataset B_MIX. To evaluate the robustness of our method under varying dataset quality, we conduct experiments with different numbers of expert trajectories combined with the full set of undesirable trajectories in the unlabeled dataset. We compare our approach against ILID and ReCOIL, which leverage B_G and B_MIX, as well as SafeDICE, which learns from B_B and B_MIX. The detailed results of this study are presented in Figure 7. In the MuJoCo locomotion tasks, increasing the quality of the unlabeled dataset has minimal effect on SafeDICE and ILID, and both methods continue to underperform on the Adroit hand manipulation tasks regardless of the number of expert trajectories included.
In contrast, ReCOIL shows improved performance as the quality
of the unlabeled dataset increases, successfully learning 4 out of 8 tasks across both the locomotion and manipulation domains. Overall, our method achieves near-expert performance on 7 out of 8 tasks while requiring significantly lower-quality unlabeled datasets B_MIX, demonstrating its superior data efficiency and robustness.

[Figure 7: Scores of expert, SafeDICE, ILID, ReCOIL, and ContraDICE as the number of expert trajectories in the unlabeled dataset varies, for CHEETAH and HOPPER (RANDOM+EXPERT and MEDIUM+EXPERT) and HAMMER and RELOCATE (CLONED+EXPERT and HUMAN+EXPERT).]
Figure 7: Effect of unlabeled dataset quality on performance. We evaluate the effect of increasing the number of expert trajectories in the unlabeled dataset B_MIX. The results are computed over 5 training seeds and reported as normalized scores. Our method outperforms SafeDICE, ILID, and ReCOIL across both locomotion and manipulation tasks, achieving near-expert performance in most environments even with a small number of expert demonstrations.

D.6 Adaptations and Experiments with α > 1
In our objective function (1), we introduce a hyperparameter 0 ≤ α < 1, which controls the weighting of the bad-data objective; this corresponds to question (Q3). To evaluate the sensitivity of our method to α, we conduct experiments varying its value and observing its impact on final performance. Specifically, we perform a full sweep over α ∈ {0, 0.1, 0.2, ..., 0.9} to illustrate how this key hyperparameter influences learning outcomes.
Interestingly, we observe that in some cases, settings with α ≥ 1 yield favorable performance, suggesting that avoiding bad data may, at times, be more critical than imitating good data. However, directly applying α ≥ 1 in our original formulation violates the convexity conditions. To address this, we propose a naive modification of Objective (6) that accommodates α ≥ 1 while preserving practical applicability. The revised objective is defined as:
$$\widetilde{L}(Q|V) = (1-\gamma)\,\mathbb{E}_{s\sim p_0}[V(s)] - \mathbb{E}_{(s,a)\sim d^U}\!\left[\exp\left(\Psi(s,a)\right)\left(Q(s,a) - \gamma\,\mathbb{E}_{s'}[V(s')]\right)\right], \tag{16}$$
which enables empirical investigation of the high-α regime while sidestepping the theoretical limitations. The experimental results are provided in Figure 8. Overall, α ≥ 1 does not provide good performance, which highlights the limitation of this naive adaptation.

[Figure 8: Scores for α ∈ {0.0, 0.1, ..., 0.9, 1.0, 2.0, ..., 10.0} on HOPPER (RANDOM+EXPERT), HAMMER (CLONED+EXPERT), and KITCHEN (PARTIAL+COMPLETE).]
Figure 8: Performance for large α ≥ 1.

D.7 Comparison Between $L(Q,\pi)$ and the Surrogate $\widetilde{L}(Q,\pi)$
As shown in Proposition 4.3, the original objective $L(Q|V)$ (Equation (3)) is transformed into a modified version $\widetilde{L}(Q|V)$ (Equation (6)). This experiment investigates the performance
differences between the two objectives. To improve the stability of the original objective $L(Q|V)$, we need to address the issue of exponential terms producing extremely large values, which can lead to numerical instability. A practical approach is to clip the input to the exponential function to a bounded range $[\mathrm{minR}, \mathrm{maxR}]$, resulting in the following formulation:
$$L(Q,\pi) = (1-\gamma)\,\mathbb{E}_{s\sim p_0}\!\left[V^\pi_Q(s)\right] + (1-\alpha)\,\mathbb{E}_{(s,a)\sim d^U}\!\left[\exp\!\left(\mathrm{clip}\!\left(\frac{\Psi(s,a) - \mathcal{T}^\pi[Q](s,a)}{1-\alpha},\; \mathrm{minR},\; \mathrm{maxR}\right)\right)\right], \tag{17}$$
where minR = −7 and maxR = 7 in our experiments. The results of this ablation study are presented in Figure 9, illustrating the performance impact of this stability-enhancing modification. In general, the clipping technique effectively mitigates the instability caused by the exponential term, successfully preventing NaN errors during training. However, this modification also leads to a drop in performance and, in some tasks, causes the method to fail to learn effectively.

[Figure 9: Training curves (normalized score vs. training steps, up to 1M) comparing expert, $L(Q|V)$, and $\widetilde{L}(Q|V)$ on CHEETAH, ANT, HOPPER, and WALKER (RANDOM+EXPERT and MEDIUM+EXPERT) and PEN, HAMMER, DOOR, and RELOCATE (CLONED+EXPERT and HUMAN+EXPERT).]
Figure 9: Exponential ablation study.
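The clipping in Equation (17) is the standard guard against overflow inside an exponential; a minimal sketch of the idea (illustrative values):

```python
import numpy as np

def safe_exp(x, min_r=-7.0, max_r=7.0):
    """Clip the exponent to [min_r, max_r] before exponentiating,
    preventing overflow/NaN at the cost of biasing large values."""
    return np.exp(np.clip(x, min_r, max_r))

x = np.array([-100.0, 0.0, 3.0, 100.0])
y = safe_exp(x)
# Exponents of large residuals grow astronomically (float32 overflows to
# inf past roughly exp(88)); the clipped version caps the output at exp(7).
```

This is exactly the trade-off observed in the ablation: no NaNs, but large exponents are truncated, which biases the objective.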
D.8 Sensitivity Analysis of β
In this section, we explore how different values of the parameter β affect performance. The experimental results are provided in Table 5. The results show that while β significantly influences outcomes, performance remains consistent over a wide range of β values, implying that minimal tuning effort is needed for this hyperparameter.

Task    | Unlabeled B_MIX | β=1       | β=3       | β=5        | β=10       | β=15       | β=20       | β=30
CHEETAH | RANDOM+EXPERT   | 2.25±0.0  | 2.25±0.0  | 2.25±0.0   | 2.24±0.0   | 83.2±5.3   | 85.8±2.1   | 84.3±1.4
CHEETAH | MEDIUM+EXPERT   | 42.4±0.2  | 42.9±0.3  | 53.9±8.8   | 83.1±4.9   | 80.1±2.6   | 78.7±2.3   | 76.7±5.2
ANT     | RANDOM+EXPERT   | 39.5±7.3  | 69.3±6.5  | 60.9±28.7  | 115.6±4.6  | 118.0±2.1  | 114.5±1.7  | 116.0±2.1
ANT     | MEDIUM+EXPERT   | 91.0±1.1  | 90.6±1.7  | 93.7±1.5   | 104.8±3.9  | 106.5±2.4  | 101.1±3.3  | 95.1±1.3
HOPPER  | RANDOM+EXPERT   | 4.7±0.4   | 5.2±0.9   | 7.2±1.3    | 7.9±1.9    | 20.4±9.7   | 67.4±7.9   | 94.4±6.3
HOPPER  | MEDIUM+EXPERT   | 52.1±1.5  | 46.0±1.0  | 85.8±11.6  | 96.3±8.1   | 96.9±12.5  | 99.6±4.1   | 98.0±5.7
WALKER  | RANDOM+EXPERT   | 2.9±2.6   | 3.5±2.9   | 6.4±4.6    | 32.5±27.7  | 105.7±4.5  | 106.2±2.0  | 107.5±1.1
WALKER  | MEDIUM+EXPERT   | 68.3±3.7  | 65.8±3.2  | 53.4±3.6   | 104.9±2.5  | 108.1±0.1  | 108.2±0.2  | 108.2±0.1

Table 5: Performance of ContraDICE for different β values in MuJoCo locomotion tasks.
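The role of β can be read directly off the Extreme-V objective from Algorithm 2, $\exp(u) - u - 1$ with $u = (Q_{\text{target}}(s,a) - V(s))/\beta$: a small β sharpens the penalty on under-estimating V, while a large β flattens it. A sketch with toy residuals (not experiment data):

```python
import numpy as np

def extreme_v_loss(q_minus_v, beta):
    """Extreme-V objective from Algorithm 2: mean of exp(u) - u - 1,
    with u = (Q_target - V) / beta. Non-negative, zero only at u = 0,
    and asymmetric: V < Q is penalized exponentially, V > Q only linearly."""
    u = q_minus_v / beta
    return np.mean(np.exp(u) - u - 1.0)

residuals = np.array([-2.0, -1.0, 0.5, 2.0])  # toy Q_target - V values
loss_sharp = extreme_v_loss(residuals, beta=1.0)
loss_flat = extreme_v_loss(residuals, beta=30.0)
# Larger beta shrinks u toward 0, so the same residuals incur a much
# smaller, flatter loss -- consistent with the broad plateau in Table 5.
```

This asymmetry is what makes V track an upper envelope of Q_target, with β setting how aggressively it does so.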
PoisonSwarm: Universal Harmful Information Synthesis via Model Crowdsourcing
Yu Yan1,3, Sheng Sun1, Zhifei Zheng2, Ziji Hao2, Teli Liu2, and Min Liu1,3,4⋆
1 Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China — yanyu24z@ict.ac.cn
2 People's Public Security University of China, Beijing, China
3 University of Chinese Academy of Sciences, Beijing, China
4 Zhongguancun Laboratory, Beijing, China

Abstract. To construct responsible and secure AI applications, harmful information data is widely utilized for adversarial testing and the development of safeguards. Existing studies mainly leverage Large Language Models (LLMs) to synthesize data to obtain high-quality task datasets at scale, thereby avoiding costly human annotation. However, limited by the safety alignment mechanisms of LLMs, the synthesis of harmful data still faces challenges in generation reliability and content diversity. In this study, we propose a novel harmful information synthesis framework, PoisonSwarm, which applies a model crowdsourcing strategy to generate diverse harmful data while maintaining a high success rate. Specifically, we generate abundant benign data as base templates in a counterfactual manner. Subsequently, we decompose each base template into multiple semantic units and perform unit-by-unit toxification and final refinement through dynamic model switching, thus ensuring the success of synthesis. Experimental results demonstrate that PoisonSwarm achieves state-of-the-art performance in synthesizing different categories of harmful data with high scalability and diversity. Warning: This paper contains certain harmful content to serve as examples.

Keywords: Harmful Information Detection · Data Synthesis

1 Introduction
Harmful information [13], including misinformation, disinformation, hate speech, and offensive content, can have significantly negative impacts on individuals and society.
⋆ Min Liu is the corresponding author: liumin@ict.ac.cn.
arXiv:2505.21184v1 [cs.LG] 27 May 2025

To prevent the widespread propagation of harmful information in online environments, people have increasingly focused on collecting the corresponding data to develop dedicated detection systems [15] or to strengthen the adversarial robustness of AI products [21] against such harmful information.

[Figure 1 shows example data snippets contrasting the three construction methods.]
Fig. 1: Illustration of different methods for harmful data construction. (a) Manual collection (left) curates and annotates real-world data from the online environment, but is limited by the scarcity and diversity of harmful data. (b) Data augmentation (right-top) generates abundant data by paraphrasing samples, but tends to produce homogeneous data with low toxicity, e.g., offensive language. (c) Data synthesis (right-bottom) generates abundant and diverse data by utilizing LLMs' world knowledge. We decompose such harmful tasks and introduce the model crowdsourcing strategy to ensure data diversity and generation success for highly toxic data, e.g., phishing tweets.

The construction of a harmful information dataset via manual collection [9] involves labor-intensive efforts to curate and annotate data from social media, thus ensuring
the effectiveness of safeguards in real-world scenarios. However, considering the temporal delay and data sparsity in collection, it is challenging to construct a high-quality harmful information dataset that comprehensively reflects emerging threats. Additionally, the class imbalance and data scarcity issues [8] significantly limit the effectiveness of raw manual datasets, particularly when used for adversarial testing on domain-specific harmful content, e.g., extremist or terrorist speech. To overcome these limitations, data augmentation [10] and data synthesis [8] methods offer a scalable solution by generating simulated harmful data through Large Language Models (LLMs). Although advanced aligned LLMs are often prevented from generating harmful content from scratch due to jailbreak concerns [21], they can still be used to expand seed data at scale in data augmentation manners, such as template mutation [17], paraphrasing [10], and sentiment adjustment [8]. However, such methods tend to generate homogeneous and low-toxicity data [1], leading to decreased diversity as the expanded dataset grows. In contrast, data synthesis methods enable LLMs to leverage world knowledge for diverse harmful data generation, capturing the threats of simulated human harmful behaviors and AI-driven harmful speech campaigns [13], while heavily relying on jailbreak attack techniques. Given the instability of such jailbreak attacks [17], especially when targeting advanced aligned LLMs, existing data synthesis methods often struggle to benefit from improvements in LLMs, limiting their ability to distill diverse, high-quality data. To alleviate this problem, we decouple risky operations from the use of strong LLMs through a strong-weak LLM collaboration framework. Specifically, as illustrated in Fig. 1(c), the task of phishing tweet synthesis is decomposed into multiple sub-tasks, each assigned to a specific LLM.
When a malfunction occurs, a weaker LLM dynamically replaces the faulty LLM for that sub-task (e.g., GPT-4o → Qwen-2.5-7B). Through such dynamic switching, the framework minimizes failures of harmful data synthesis caused by individual malfunctions while leveraging the capabilities of advanced LLMs for high-quality generation.

Building on the strong-weak model collaboration framework, we propose PoisonSwarm, a novel approach for robustly synthesizing diverse harmful data through model crowdsourcing. In PoisonSwarm, LLMs collaborate to generate abundant benign data and progressively toxify it, thus creating diverse harmful data while maintaining a high success rate. Specifically, we first employ counterfactual methods to generate benign content that has the desired structure and thematic backbone. This benign content is then decomposed into smaller semantic units, each of which is assigned to different LLMs for toxification. If a unit produces invalid results or malfunctions, PoisonSwarm dynamically switches to another LLM to continue the toxification process. Finally, all toxified content is integrated and refined into high-quality harmful data. Experimental results demonstrate that PoisonSwarm outperforms existing methods across multiple evaluation metrics, highlighting its effectiveness in generating diverse and reliable harmful data.

Our major contributions are:
– This study reveals the risks of harmful information generation that accompany the rapid development of strong LLMs. To evade strong LLMs' safety alignment mechanisms, this study introduces a strong-weak LLM collaboration framework, where weak LLMs handle risky operations guided by strong LLMs to synthesize high-quality harmful information.
– This study proposes the universal harmful information synthesis approach PoisonSwarm, which generates abundant
benign data as base templates and toxifies them by crowdsourcing the work to multiple LLMs, thus addressing the challenges of content diversity and generation reliability.
– Experimental results demonstrate that PoisonSwarm outperforms existing methods in harmful information synthesis across multiple evaluation metrics, highlighting its effectiveness in generating diverse and high-quality harmful data for adversarial training and testing, thereby enhancing AI security.

2 Related Work

– Harmful Information Generation is utilized to augment the training effectiveness of harmful content detection systems in multiple studies [2,8,1]. Among them, Self-LLMDA [10] and Toxicraft [8] identify the implicit rules of the data and thus generate data more rationally based on prompt engineering, while some studies [2] fine-tune the generative model to achieve this. More cleverly, SynthesizRR [6] introduces retrieval augmentation to ensure data diversity. However, such methods cannot benefit from the use of advanced aligned LLMs for higher-quality data generation, as they lack adversarial techniques to bypass LLMs' safety alignment mechanisms, thus limiting their effectiveness in generating highly toxic data.

Y. Yan et al.

– Adversarial Attacks on Language Models have attracted increasing attention in recent years, as LLMs have become increasingly integrated into various applications. Jailbreaking attacks such as GCG [21], PAIR [3], and LLM-Fuzzer [17] are proposed to induce LLMs to generate harmful content by breaking their safety alignment mechanisms. While these methods can be applied to harmful information synthesis, they still face the challenge of an unstable synthesis success rate, especially when targeting advanced aligned LLMs [21,17]. To ensure the stability of harmful content generation, it is beneficial to design a collaborative framework that comprehensively exploits the vulnerabilities of different LLMs.
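As context for the detector-training use case above, Section 3 later shows that conventional detectors largely rely on fixed lexical patterns. A toy sketch of such a fixed-keyword detector is given below; the keyword list and example texts are invented for illustration and are not taken from any cited system:

```python
# Toy fixed-keyword harmful-content detector: the kind of lexical-pattern
# matching that implicit AI-generated content can evade (see Section 3).
# The keyword set below is an illustrative assumption.

TOXIC_KEYWORDS = {"f**k", "s**t", "jerk", "idiot"}

def keyword_detector(text: str) -> bool:
    """Flag text as harmful iff it contains an explicit toxic keyword."""
    tokens = (tok.strip(".,!?#\"'") for tok in text.lower().split())
    return any(tok in TOXIC_KEYWORDS for tok in tokens)

# An explicit insult is flagged; implicit hostility slips through.
print(keyword_detector("He's the biggest jerk!"))                        # True
print(keyword_detector("Feigning smiles doesn't wash away the damage"))  # False
```

The second example illustrates why datasets limited to explicit lexical features leave detectors blind to implicit harmful content.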
– Model Collaboration has emerged as a general but effective strategy to improve overall performance across different tasks. Current studies have explored several typical collaborative frameworks, such as the Multi-Agent System (MAS) [18], Retrieval-Augmented Generation (RAG) [19], and Small-Large Language Model Collaboration (SLLM) [16]. In our study, we introduce Strong-Weak Model Collaboration to alleviate the risk of failure in harmful data synthesis while utilizing the advanced capabilities of strong LLMs, ensuring that the synthesis process remains robust.

3 Motivation

AI-generated harmful information can be rapidly produced to simulate false hotspots, potentially causing the propagation of deceptive narratives at scale for cyberattacks. However, existing studies on the security of generative AI mainly focus on how to jailbreak LLMs [21,17], while lacking exploration of the downstream network threats posed by the weaponization of these attacks and the corresponding defense strategies. To bridge this gap, we seek to validate and utilize the implications of weaponized generative AI for AI security development.

Motivation: The reliance on passively collected data results in an incomplete representation of the diverse manifestations of harmful information, limiting the effectiveness of AI safeguards trained on such datasets.

We focus on hate speech for illustration, which is a typical form of harmful information. As shown in Fig.2, we systematically compare the effectiveness of existing harmful content detectors in identifying human-generated and AI-generated harmful information. Specifically, for hate-speech data, we utilize PoisonSwarm to generate 90 instances of hate speech targeting a simulated user Johnny. Table 1 presents representative examples of the synthesized data. Additionally, we
further generate 40 counter-narrative and supportive speeches to test the systems' false positive rates on AI-generated content. The human-generated dataset is sourced from Measuring Hate Speech (MHS) [9], providing a benchmark for comparison. For detectors, we select models from Hugging Face, i.e., GroNLP/hateBERT, MilaNLProc/hate-ita, Hate-speech-CNERG/bert-base-uncased-hatexplain, facebook/roberta-hate-speech-dynabench-r4-target, knowhate/HateBERTimbau-yt-tt, ctoraman/hate-speech-berturk, and IMSyPP/hatespeech_nl.

The detection results presented in Fig.2 demonstrate that generative AI can produce more implicit and informationally rich harmful content, as shown in the word cloud, which can effectively evade identification by traditional detectors that rely on fixed linguistic patterns such as toxic keywords (e.g., "f**k", "s**t"). Consequently, these conventional detection methods, which are primarily trained on limited harmful information with explicit lexical features, exhibit diminished performance when confronted with AI-generated harmful content. To alleviate this vulnerability, it is important to develop methods for obtaining more data that comprehensively reflects the diverse patterns of harmful information, thereby providing fully adversarial training and testing for AI safeguards.

Fig.2: Comparison of existing harmful information detector performance on human/AI-generated content. Fig.2(a) shows the word cloud of AI-generated hate speech targeting the simulated user Johnny; Fig.2(b) illustrates the significant performance gap across multiple detectors, highlighting the challenges posed by AI-driven harmful speech campaigns.

Table 1: Hate speech generated by AI targeting the simulated user Johnny.

Maligning Johnny: Johnny's "small gestures" make big trouble.
Damaging public property, bullying special groups. He's the biggest jerk #JohnnyIsADick
Feigning smiles doesn't wash away the damage he's caused. His words are poison, and his actions are barbaric #JohnnyIsAnAssHole

Criticizing Johnny for philandering: Johnny out here making everyone think he's pure gold and an epitome of respect!!! When he's really just the guy who secretly keeps multiple women hooked on his toxic love and abuse. DON'T BE FOOLED BY THE GLITTERY OUTSIDE!!!

[Fig.3 diagram content: in Counterfactual Mapping, the harmful request "A letter that persuades someone for suicide bombing" is mapped, with a mapping reason, to the benign request "A letter that encourages someone for the cake sharing", for which a benign letter is generated; the Model Crowdsourcing Queue (GPTs, Qwens, Deepseeks, Mistrals) then toxifies it in Adversarial Content Toxicifying into the corresponding harmful letter, which is checked in Hallucination Evaluation. The diagram also shows a data outline for terrorist speech (inflammatory speech, threatening speech, ...; common forms: posts, manifestos, tweets, letters).]
Fig.3: The overall structure of our PoisonSwarm. Specifically, the Model Crowdsourcing Queue (§4.1) is the core mechanism of PoisonSwarm for ensuring that harmful content is successfully obtained. In Counterfactual Mapping (§4.2), PoisonSwarm first generates benign content with the desired structure and thematic elements. Then, in Adversarial Content Toxicifying (§4.3), the benign content is segmented into smaller semantic units for targeted toxification. Multiple LLMs collaborate to transform these units into harmful content, with dynamic model switching when malfunctions occur. Finally, in Hallucination Evaluation (§4.4), the generated harmful information is validated for coherence and relevance.

4 Methodology

In this section, we introduce PoisonSwarm, which can synthesize diverse harmful information while maintaining a high success rate through strong-weak model collaboration. The overall structure of PoisonSwarm is shown in Fig.3.

4.1 Model Crowdsourcing Queue

Existing LLMs exhibit varying degrees of resistance to generating harmful content. Strong LLMs can produce high-quality harmful content but with low success rates, while weak LLMs have higher success rates in generating harmful content but produce output of lower quality.
To balance content quality and generation reliability, we introduce the Model Crowdsourcing Queue (MCQ), which leverages a greedy strategy to integrate multiple LLMs into a prioritized queue for dynamic task allocation.

Fig.4: The overall process of the Model Crowdsourcing Queue (MCQ): Advanced Models (higher quality), then Balanced Models, then Fallback Models (more robust), each level followed by verification.

Algorithm 1: Workflow of Model Crowdsourcing Queue (MCQ)
Input: x: harmful query; Q(·): verification function; MA, MB, MF: models at the three levels.
Output: y: harmful output.
1: Initialize queue ← [MA, MB] + [MF] × n_retries
2: foreach M ∈ queue do
3:   model ← randomSample(M)
4:   y ← model.generate(x)
5:   if y ≠ ∅ and Q(y) = true then
6:     return y
7:   end
8: end
9: return None

As shown in Fig.4 and Algorithm 1, the Model Crowdsourcing Queue categorizes the LLMs into three hierarchical levels, reflecting the trade-off between content quality and generation reliability:
– Advanced Models (AMs, MA) are LLMs with strong reasoning and linguistic capabilities, but they often refuse to address any harmful query. AMs are first-level models used to generate high-quality responses, including GPT-4o, Qwen2.5-72B-Instruct, and Deepseek-V3.
– Balanced Models (BMs, MB) are LLMs that can actively address most harmful queries in specialized contexts, such as scientific research or criminal investigations. BMs are the core workhorses, including GPT-4o-mini and Qwen2.5-{32B,14B}-Instruct.
– Fallback Models (FMs, MF) are LLMs that reliably address the harmful queries in PoisonSwarm even under greedy decoding (temperature 0). FMs serve as fallback options when higher-level models fail
to produce acceptable outputs, including Qwen{2,2.5}-7B-Instruct and GLM4-9B-chat.

In the Model Crowdsourcing Queue, the verification function Q(·) performs keyword-based filtering [21], specifically detecting refusal phrases such as "Sorry, I cannot...". If no such keywords are found in the generated output, the result is considered valid. The Model Crowdsourcing Queue is the basic mechanism for ensuring the stability of harmful data synthesis by leveraging the vulnerabilities of different LLMs; it also further enhances the diversity of the generated content by incorporating outputs from LLMs with varying world knowledge.

4.2 Counterfactual Mapping

While the Model Crowdsourcing strategy leverages vulnerabilities in different LLMs to address harmful queries, it still cannot guarantee that PoisonSwarm can benefit from advanced aligned LLMs. Therefore, we employ Counterfactual Mapping (CM) to decouple harmless subtasks from the harmful data synthesis process, enabling these advanced LLMs to contribute effectively.

Table 2: Comparison of encouraging and bullying tweets targeting the simulated user Johnny, indicating that harmful content can be viewed as a negative style transfer of benign content.

Benign: Setbacks aren't failures; they're just plot twists in your success story. Keep pushing, Johny — the best is yet to come! #StayResilient
Harmful: Failures aren't inevitable, they're @Johnny's personal brand. Keep pushing yourself into failure, champ — you'll go down in history as a total failure! #StayDumb

Benign: You're stronger, smarter, and more capable than you think, Johny. Trust yourself — you're on the right path! #BelieveInYourself
Harmful: You're dumber and more incapable than you think, @Johnny. Quit while you're ahead — you're clearly lost and powerless. #KeepQuiting

Several studies [6] have attempted to generate false news by rewriting factual content.
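Putting Algorithm 1 and the keyword-based verification function Q(·) together, the MCQ workflow can be sketched as follows. This is a minimal sketch, not the paper's implementation: the refusal-phrase list, model identifiers, and the `generate` callable are all assumptions for illustration.

```python
import random

# Sketch of Algorithm 1 (Model Crowdsourcing Queue). The refusal phrases,
# level contents, and `generate` stub below are illustrative assumptions.

REFUSAL_PHRASES = ["sorry, i cannot", "i can't assist", "i cannot help"]

def q_verify(output):
    """Q(.): keyword-based filtering -- valid iff non-empty and refusal-free."""
    return bool(output) and not any(p in output.lower() for p in REFUSAL_PHRASES)

def mcq(x, advanced, balanced, fallback, generate, n_retries=3):
    """Walk the prioritized queue [M_A, M_B] + [M_F] * n_retries; at each
    level, randomly sample one model and return the first verified output."""
    queue = [advanced, balanced] + [fallback] * n_retries
    for level in queue:
        model = random.choice(level)
        y = generate(model, x)
        if q_verify(y):
            return y
    return None

# Usage with a stub generator: the higher-level models refuse, so MCQ
# dynamically falls back to a weaker model (hypothetical model names).
def stub_generate(model, x):
    return "Sorry, I cannot help with that." if model != "weak-7b" else "draft text"

print(mcq("query", ["strong-72b"], ["mid-14b"], ["weak-7b"], stub_generate))  # draft text
```

The greedy ordering mirrors the trade-off described in §4.1: quality is tried first, robustness last.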
We extend this paradigm by conceptualizing the generation of harmful content as a counterfactual [14], negative style transfer of benign content, as demonstrated in Table 2. Consequently, generating diverse harmful content primarily requires producing diverse benign content as the foundation. To achieve this, we define data outlines and extract harmful data generation queries Qh, which indicate the desired data objectives, style, and subject matter. Then, we transform Qh into a benign query Qb with a textual mapping rationale Rm via counterfactual mapping, and use Qb to generate a benign template Tb:

(Qb, Rm) = MCQ(Pctf ⊕ Qh),  Tb = MCQ(Qb),  (1)

where MCQ is the model crowdsourcing queue, Pctf is the counterfactual mapping prompt, and the benign template Tb can be a long document or a batch of short texts in a structured format. The use of counterfactual mapping ensures that strong LLMs contribute their superior capabilities to the generation of harmful content.

4.3 Adversarial Content Toxicifying

After generating diverse, high-quality benign templates through Counterfactual Mapping, we adversarially transform these templates into harmful content in Adversarial Content Toxicifying (ACT), which decomposes the benign template into multiple semantic units for toxification in three main steps.

– Semantic Decomposition. We first segment the benign template Tb into semantic units {u1, u2, ..., un}, where each unit consists of several sentences and n is the number of units. This multi-sentence segmentation strategy is designed to maintain coherence in model responses, as single-sentence

Counterfactual Mapping Prompt (Pctf):
"In a certain domain, we often explore the following Query: {query} Now, as an interdisciplinary expert, please transform this question into a different domain. Considering the characteristics and background of the new domain,