generally performs poorly across the datasets, which shows the importance of candidate toxic word filtering. (2) ft-SCOPE consistently outperforms SCOPE across all metrics and datasets, which reveals a significant domain gap between general CSC pretraining data and toxic speech corpora. Further, our method C2TU-BERT, which is not fine-tuned, beats ft-SCOPE in all comparisons. Although fine-tuning helps memorize toxic knowledge, it only captures the fixed toxicity mapping patterns present in the training data; our model, injected with the homo-graph and toxic lexicon, has stronger generalizability and can perceive more cloaked toxicities. (3) The LLM-based model Simple-CSC exhibits undesirable results in some cases, which are even worse than the BERT-based models. For example, in sentence-level toxicity detection on the COLDataset, Simple-CSC achieves an accuracy of 53.85% while ft-SCOPE reaches 63.52%. This can be attributed to the limited cloaked toxic knowledge of LLMs, which is consistent with the poor performance of the prompt-based LLM models Baichuan2-7B-Base and Deepseek-V3. (4) Our models C2TU-LLM and C2TU-BERT take the first two ranks in all contests. On the one hand, they use the homo-graph and toxic lexicon to match all potential toxic words in a given text sequence, which increases the true positive rate. On the other hand, they further apply a filtering step to mitigate the risk of transforming non-toxic words into toxic ones, which decreases the false positive rate.

Table 2: Character-level analysis between the Naive method and C2TU on ToxicloakCN and COLDataset. T/F means True/False; P/N means Positive/Negative. All results are normalized by dividing the raw values by the total number of sentences and characters in the dataset. We highlight the best score in bold.

Dataset: ToxicloakCN
              Detection                                     Correction
Metrics/%     TP↑    FP↓    FN↓   TN↑    recall↑ precision↑  TP↑   FP↓    FN↓   TN↑    recall↑ precision↑
Naive         2.41   18.74  0.19  78.67  92.74   11.39       2.02  18.74  0.58  78.67  77.83   9.74
C2TU-BERT     1.55   0.29   1.05  97.11  59.54   84.25       1.42  0.29   1.18  97.11  54.50   83.04
C2TU-LLM      1.79   0.18   0.80  97.23  69.09   91.04       1.67  0.18   0.93  97.23  64.13   90.41

Dataset: COLDataset
              Detection                                     Correction
Metrics/%     TP↑    FP↓    FN↓   TN↑    recall↑ precision↑  TP↑   FP↓    FN↓   TN↑    recall↑ precision↑
Naive         5.99   0.94   0.33  92.74  94.72   86.50       5.64  0.94   0.69  92.74  89.11   85.77
C2TU-BERT     5.35   0.08   0.98  93.59  86.83   99.51       5.24  0.08   1.09  93.59  85.51   99.50
C2TU-LLM      5.60   0.02   0.73  93.66  88.42   99.71       5.55  0.02   0.78  93.66  87.66   99.70

4.3 Analysis of Candidate Toxic Word Filtering

As mentioned in Section 3.2, the Naive method, which performs toxic word matching only and discards candidate toxic word filtering, achieves a small false negative rate at the cost of a high false positive rate. To verify this claim, we report the True/False Positive/Negative rates, recall, and precision in Table 2. From the table, Naive has better TP and FN rates than our methods, which results in higher recall values. This is because Naive regards all candidate toxic substrings as "true" toxic words. However, this leads to over-detection and over-correction, categorizing a large number of non-toxic words as toxic. Our methods further filter out candidate toxic words that are non-toxic and thus achieve better
https://arxiv.org/abs/2505.22184v1
FP and TN rates, and higher precision scores. For example, on the ToxicloakCN dataset, in the correction task, Naive has a precision of 9.74%, while our model C2TU-LLM reaches 90.41%, which is around 10× larger. In summary, our methods significantly improve precision at the cost of a mild drop in recall, which finally leads to larger F1 scores.

4.4 Efficiency Study

[Figure 3: Efficiency study on the sentence-level correction task. The plot shows F1 score (%) against running time (hours) for Baichuan2-7B, Simple-CSC, C2TU-BERT and C2TU-LLM on ToxicloakCN and COLDataset.]

We next analyze the model efficiency. We conduct experiments on two datasets, where ▲ indicates COLDataset and ⋆ represents ToxicloakCN. We use different colors to denote different methods, including C2TU-LLM, C2TU-BERT, Simple-CSC and Baichuan2-7B-Base. For fairness, all LLM-based methods are based on Baichuan2-7B-Base.
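As a concrete check of the numbers in Table 2, the precision, recall and F1 definitions of Appendix D can be applied directly to the normalized T/F P/N rates. A minimal sketch; since the published rates are rounded, the recomputed scores can drift a few tenths of a percent from the table, which is computed from unrounded counts:

```python
# Precision/recall/F1 from normalized TP/FP/FN rates (Appendix D, Eqs. 10-11).
def prf(tp, fp, fn):
    p = 100 * tp / (tp + fp)      # precision in %
    r = 100 * tp / (tp + fn)      # recall in %
    return p, r, 2 * p * r / (p + r)

# ToxicloakCN, correction task (Table 2): Naive vs. C2TU-LLM.
naive = prf(tp=2.02, fp=18.74, fn=0.58)
c2tu_llm = prf(tp=1.67, fp=0.18, fn=0.93)

print("Naive    P=%.2f%% R=%.2f%% F1=%.2f%%" % naive)
print("C2TU-LLM P=%.2f%% R=%.2f%% F1=%.2f%%" % c2tu_llm)
```

Despite the mild recall drop, the roughly 80-point precision gain lifts the correction F1 from about 17% to about 75%, which is the trade-off summarized above.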
The results are presented in Figure 3. From the figure, we observe: (1) Among the LLM-based methods, C2TU-LLM is significantly more efficient than Simple-CSC and Baichuan2-7B-Base. This is because C2TU-LLM only uses the LLM to output sentence probability differences, while the other two methods perform token generation. (2) C2TU-BERT is more efficient than C2TU-LLM due to its smaller number of model parameters, but at the cost of lower effectiveness. In summary, our models are highly effective and also efficient.

4.5 Performance Across LLMs

We next evaluate the performance of C2TU-LLM across different LLMs, including LLaMA3-8B [1], Qwen2.5-7B [23] and Baichuan2-7B [25]. We take Simple-CSC and Deepseek-V3 as our baselines, as they are also LLM-based.

Table 3: The results of C2TU-LLM with different LLMs w.r.t. F1 score and accuracy metrics, where Det. and Cor. mean detection and correction. We highlight the best score in bold.

Metric/% = F1 score
                                ToxicloakCN                  COLDataset
                                Sentence      Character      Sentence      Character
Method        Model             Det.   Cor.   Det.   Cor.    Det.   Cor.   Det.   Cor.
Simple-CSC    Baichuan2-13B     47.81  41.28  50.18  43.77   59.60  57.36  71.60  69.28
Prompt-based  Deepseek-V3       28.66  20.21  17.15  8.56    40.89  31.99  40.15  26.84
C2TU-LLM      LLaMA3-8B         68.74  64.30  74.44  70.98   89.55  86.25  92.74  91.97
              Qwen2.5-7B        67.20  63.52  73.46  70.19   89.37  86.30  92.41  91.73
              Baichuan2-7B      74.67  70.01  78.56  75.04   90.52  88.54  93.73  93.30

Metric/% = Accuracy
Simple-CSC    Baichuan2-13B     66.38  63.58  98.13  97.97   53.85  51.99  97.15  96.97
Prompt-based  Deepseek-V3       46.36  41.04  83.46  82.57   40.59  32.26  85.67  83.82
C2TU-LLM      LLaMA3-8B         79.83  77.71  98.85  98.73   87.02  83.98  99.14  99.06
              Qwen2.5-7B        78.81  77.04  98.80  98.69   86.94  84.12  99.10  99.03
              Baichuan2-7B      83.60  81.36  99.02  98.89   88.37  86.55  99.25  99.20

[Figure 4: Sentence-level correction accuracy gap between C2TU, C2TU-WP and C2TU-SR on both datasets. The BERT-based model is employed on bert-chinese-base and the LLM-based model is employed on Baichuan2-7B-Base.]

The results are shown in Table 3. While Simple-CSC employs Baichuan2-13B-Base and Deepseek-V3 is 671B, our method consistently outperforms them with LLMs of smaller parameter scales (7B and 8B). This further shows the effectiveness of our method with the homo-graph and Chinese toxic lexicon, which inject knowledge of cloaked toxicity into LLMs.

4.6 Ablation Study

We also conduct an ablation study on C2TU to understand the characteristics of its main components. Given the auto-regressive nature of LLMs, it is straightforward to predict the occurrence
probability of a word based only on its preceding context Xpre. This helps us understand the importance of the sentence probability method that leverages the full context in the filtering stage. We call this method C2TU-WP (Word Probability). Further, filtering candidate toxic words in Algorithm 2 involves multiple rounds. We introduce a single-round strategy, in which each (w, l) pair satisfying Pw < Pl is replaced. This helps us understand the importance of the iterative filtering strategy. We call this method C2TU-SR (Single Round). We then compare C2TU with C2TU-WP and C2TU-SR across the two datasets. Due to space limitations, we only show the accuracy of sentence-level correction in Figure 4; for the other cases, we observe similar results, detailed in Appendix I. From the figure, we see that: (1) C2TU-LLM significantly beats C2TU-WP. This indicates that Xtail also provides essential semantic information, and that leveraging the full text context for filtering candidate toxic words is necessary. (2) Both C2TU-BERT and C2TU-LLM consistently outperform C2TU-SR across the datasets. This is because C2TU replaces the (w, l) pair with the largest probability difference in each round, which iteratively enhances the context of the remaining (w, l) pairs and thus the correction accuracy.

5 Conclusion

In this paper, we presented C2TU, a training-free and prompt-free method for unveiling Chinese cloaked toxic contents. Specifically, our method consists of two stages: Chinese cloaked toxicity matching and filtering. For the former, we constructed the homo-graph to capture homophone similarity relationships between Chinese characters, and then matched toxic words via the homo-graph and toxic lexicon. To mitigate over-correction, we employed language models (BERT-based and LLM-based) to filter candidate toxic words iteratively. Comprehensive experimental results show that C2TU achieves superior performance in Chinese cloaked toxicity unveiling.
References

[1] AI@Meta. Llama 3 model card. 2024.
[2] A. Arora, P. Nakov, M. Hardalov, S. M. Sarwar, V. Nayak, Y. Dinkov, D. Zlatkova, K. Dent, A. Bhatawdekar, G. Bouchard, et al. Detecting harmful content on online platforms: what platforms need vs. where research efforts go. ACM Computing Surveys, 56(3):1–17, 2023.
[3] T. Bayes. An essay towards solving a problem in the doctrine of chances. Biometrika, 45(3-4):296–315, 1958.
[4] X. Cheng, W. Xu, K. Chen, S. Jiang, F. Wang, T. Wang, W. Chu, and Y. Qi. SpellGCN: Incorporating phonological and visual similarities into language models for Chinese spelling check. arXiv preprint arXiv:2004.14166, 2020.
[5] C. Cui. JioNLP, 2020.
[6] T. Davidson, D. Warmsley, M. Macy, and I. Weber. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11, pages 512–515, 2017.
[7] J. Deng, J. Zhou, H. Sun, C. Zheng, F. Mi, H. Meng, and M. Huang. COLD: A benchmark for Chinese offensive language detection. arXiv preprint arXiv:2201.06025, 2022.
[8] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers),
pages 4171–4186, 2019.
[9] G. Dong, J. Zhao, T. Hui, D. Guo, W. Wang, B. Feng, Y. Qiu, Z. Gongque, K. He, Z. Wang, et al. Revisit input perturbation problems for LLMs: A unified robustness evaluation framework for noisy slot filling task. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 682–694. Springer, 2023.
[10] M. Dong, Y. Chen, M. Zhang, H. Sun, and T. He. Rich semantic knowledge enhanced large language models for few-shot Chinese spell checking. arXiv preprint arXiv:2403.08492, 2024.
[11] S. Ghosh, P. Varshney, E. Galinkin, and C. Parisien. AEGIS: Online adaptive AI content safety moderation with ensemble of LLM experts. arXiv preprint arXiv:2404.05993, 2024.
[12] H. Gong, Y. Li, S. Bhat, and P. Viswanath. Context-sensitive malicious spelling error correction. In The World Wide Web Conference, pages 2771–2777, 2019.
[13] N. Jafari, J. Allan, and S. M. Sarwar. Target span detection for implicit harmful content. In Proceedings of the 2024 ACM SIGIR International Conference on Theory of Information Retrieval, pages 117–122, 2024.
[14] A. Jiang, X. Yang, Y. Liu, and A. Zubiaga. SWSR: A Chinese dataset and lexicon for online sexism detection. Online Social Networks and Media, 27:100182, 2022.
[15] H. Kirk, B. Vidgen, P. Rottger, T. Thrush, and S. A. Hale. Hatemoji: A test suite and adversarially-generated dataset for benchmarking and detecting emoji-based hate. In M. Carpuat, M.-C. de Marneffe, and I. V. Meza Ruiz, editors, Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1352–1368, Seattle, United States, July 2022. Association for Computational Linguistics.
[16] J. Li, Q. Wang, Z. Mao, J. Guo, Y. Yang, and Y. Zhang. Improving Chinese spelling check by character pronunciation prediction: The effects of adaptivity and granularity. arXiv preprint arXiv:2210.10996, 2022.
[17] K. Li, Y. Hu, L. He, F. Meng, and J. Zhou. C-LLM: Learn to check Chinese spelling errors character by character. arXiv preprint arXiv:2406.16536, 2024.
[18] A. Liu, B. Feng, B. Xue, B. Wang, B. Wu, C. Lu, C. Zhao, C. Deng, C. Zhang, C. Ruan, et al. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437, 2024.
[19] J. Lu, B. Xu, X. Zhang, C. Min, L. Yang, and H. Lin. Facilitating fine-grained detection of Chinese toxic language: Hierarchical taxonomy, resources, and benchmarks. arXiv preprint arXiv:2305.04446, 2023.
[20] M. Phute, A. Helbling, M. Hull, S. Peng, S. Szyller, C. Cornelius, and D. H. Chau. LLM self defense: By self examination, LLMs know they are being tricked. arXiv preprint arXiv:2308.07308, 2023.
[21] X. Rao, Y. Zhang, S. Peng, Q. Jia, and X. Liu. Chinese hate speech detection method based on RoBERTa-WWM. In M. Sun, B. Qin, X. Qiu, J. Jiang, and X. Han, editors, Proceedings of the 22nd Chinese National Conference on Computational Linguistics, pages 501–511, Harbin, China, Aug. 2023. Chinese Information Processing Society of China.
[22] A. Sheth, V. L. Shalin, and U. Kursuncu. Defining and detecting toxicity on social media: context and knowledge are
key. Neurocomputing, 490:312–318, 2022.
[23] Qwen Team. Qwen2.5: A party of foundation models, September 2024.
[24] Y. Xiao, Y. Hu, K. T. W. Choo, and R. K.-w. Lee. ToxiCloakCN: Evaluating robustness of offensive language detection in Chinese with cloaking perturbations. arXiv preprint arXiv:2406.12223, 2024.
[25] A. Yang, B. Xiao, B. Wang, B. Zhang, C. Bian, C. Yin, C. Lv, D. Pan, D. Wang, D. Yan, et al. Baichuan 2: Open large-scale language models. arXiv preprint arXiv:2309.10305, 2023.
[26] H. Zhang, H. Gao, Q. Hu, G. Chen, L. Yang, B. Jing, H. Wei, B. Wang, H. Bai, and L. Yang. ChineseSafe: A Chinese benchmark for evaluating safety in large language models. arXiv preprint arXiv:2410.18491, 2024.
[27] S. Zhang, H. Huang, J. Liu, and H. Li. Spelling error correction with soft-masked BERT. arXiv preprint arXiv:2005.07421, 2020.
[28] Y. Zhao, J. Zhu, C. Xu, and X. Li. Enhancing LLM-based hatred and toxicity detection with meta-toxic knowledge graph. arXiv preprint arXiv:2412.15268, 2024.
[29] H. Zhou, Z. Li, B. Zhang, C. Li, S. Lai, J. Zhang, F. Huang, and M. Zhang. A simple yet effective training-free prompt-free approach to Chinese spelling correction based on large language models. arXiv preprint arXiv:2410.04027, 2024.
[30] J. Zhou, J. Deng, F. Mi, Y. Li, Y. Wang, M. Huang, X. Jiang, Q. Liu, and H. Meng. Towards identifying social bias in dialog systems: Framework, dataset, and benchmark. In Y. Goldberg, Z. Kozareva, and Y. Zhang, editors, Findings of the Association for Computational Linguistics: EMNLP 2022, pages 3576–3591, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics.

A Broader Impact

While our dataset contains substantial hate-related content, our work strictly focuses on unveiling cloaked toxicity to enhance online governance. Crucially, our method neither generates new toxicity nor facilitates its propagation, ensuring a net positive societal impact through safer digital spaces.
B Limitation Discussion

While our method addresses character-substitution cloaks (the dominant case), other cloak types, such as character splitting, cross-lingual homophones, and emoji replacement, are not covered in this paper and will be studied in future work.

C Theoretical Proof

Proof C.1 Given a text sequence X = [X_pre, w, X_tail] and a toxic word l with len(w) = len(l) = N, let X′ denote the new sequence with w replaced by l. Then the following equation holds:

    ProbDiff(P_w, P_l) = log P_LLM(X) − log P_LLM(X′)    (6)

where P_LLM(X) and P_LLM(X′) denote the probabilities of X and X′ output by the LLM, respectively. Both probabilities can be calculated in the auto-regressive manner of Equation 4.

Firstly, we derive the general formula for the probability P_LLM(w | X_pre, X_tail):

    P_LLM(w | X_pre, X_tail) = P_LLM(X_pre, w, X_tail) / P_LLM(X_pre, X_tail)
                             = P_LLM(X) / P_LLM(X_pre, X_tail)    (7)

Analogously:

    P_LLM(l | X_pre, X_tail) = P_LLM(X′) / P_LLM(X_pre, X_tail)    (8)

Therefore:

    ProbDiff(P_w, P_l) = log P_w − log P_l
        = log P_LLM(w | X_pre, X_tail) − log P_LLM(l | X_pre, X_tail)
        = log [P_LLM(X) / P_LLM(X_pre, X_tail)] − log [P_LLM(X′) / P_LLM(X_pre, X_tail)]
        = log [(P_LLM(X) / P_LLM(X_pre, X_tail)) · (P_LLM(X_pre, X_tail) / P_LLM(X′))]
        = log [P_LLM(X) / P_LLM(X′)]
        = log P_LLM(X) − log P_LLM(X′)    (9)

Q.E.D.

D Details of Metrics

We evaluate model performance by F1 score and accuracy at two levels (sentence and character), each covering both the detection and correction tasks. First, we define the computation of T/F (true/false) P/N (positive/negative)
in each level and task.

For the sentence level, given a sample pair of (source sentence, target sentence) and the corresponding corrected sentence, if the length of the corrected sentence is not equal to that of the target sentence, the result is FN for both the detection and correction tasks. Otherwise, we have the following definitions:

Sentence Level Detection (1) If the source sentence differs from the target sentence, the source sentence needs to be corrected. In this situation, if all wrong characters in the source sentence are corrected (no matter whether the correction is correct or not) and no proper character is over-corrected, the result is TP; otherwise, the result is FN. (2) If the source sentence is identical to the target sentence, the source sentence does not need to be corrected. In this situation, if the corrected sentence is the same as the target sentence, the result is TN; otherwise, the result is FP.

Sentence Level Correction (1) If the source sentence differs from the target sentence, the source sentence needs to be corrected. In this situation, if all wrong characters in the source sentence are corrected correctly and no proper character is over-corrected, the result is TP; otherwise, the result is FN. (2) If the source sentence is identical to the target sentence, the source sentence does not need to be corrected. In this situation, if the corrected sentence is the same as the target sentence, the result is TN; otherwise, the result is FP.

For the character level, given a sample pair of (source character, target character) and the corresponding corrected character:

Character Level Detection (1) If the source character differs from the target character, the source character needs to be corrected. In this situation, if the corrected character is not the same as the source character (no matter whether the correction is correct or not), the result is TP; otherwise, the result is FN.
(2) If the source character is identical to the target character, the source character does not need to be corrected. In this situation, if the corrected character is the same as the target character, the result is TN; otherwise, the result is FP.

Character Level Correction (1) If the source character differs from the target character, the source character needs to be corrected. In this situation, if the source character is corrected correctly, the result is TP; otherwise, the result is FN. (2) If the source character is identical to the target character, the source character does not need to be corrected. In this situation, if the corrected character is the same as the target character, the result is TN; otherwise, the result is FP.

Based on these definitions, we calculate the F1 score (F1) and the accuracy (Acc) by:

    F1 = 2pr / (p + r)    (10)
    Acc = (TP + TN) / (TP + FN + FP + TN)    (11)

where p = TP / (TP + FP) and r = TP / (TP + FN).

E Details of Datasets

We conduct experiments on two public datasets, ToxicloakCN [24] and COLDataset [7], and fine-tune the SCOPE [16] model on a CHSD-subset constructed from CHSD [21]. Details of these datasets are given in Table 4.

Homo-Graph We use pypinyin to capture the tone-ignored pinyin of all Chinese characters in the dataset
and connect characters with identical or phonetically similar pinyins, as defined in Section 3.1.1. The details of the homo-graphs are shown in Table 5.

Toxic Lexicon (1) ToxicloakCN is an enhanced dataset derived from ToxiCN [19], where JioNLP [5] and NMSL are applied to perform homophone substitution and semantically similar emoji substitution on toxic speech, respectively [24]. For keyword-based substitution, the toxic lexicon from ToxiCN is used as the keyword set. To obtain a cleaner lexicon without cloaks, we further correct and deduplicate ToxiCN's lexicon: we first manually correct all toxic words in the original lexicon back to their protowords, and then retain each protoword. (2) COLDataset is collected through keyword-based web crawling, comprising 37,480 contents and spanning a wide range of topics related to race, gender, and regional issues. Since no existing work has compiled a toxic lexicon for COLDataset, we instead use the crawler keywords employed during dataset construction [7] as the toxic lexicon. Based on this lexicon, we merge the validation set with the test set and apply JioNLP [5] to perform homophone substitution, yielding the cloaked dataset, following the noise-injection procedure of ToxicloakCN [24]. The final toxic lexicons are summarized in Appendix H.

Dataset for Finetuning To ensure a fair comparison between our model and CSC models, we further fine-tune the BERT-based CSC model SCOPE [16] on a subset of CHSD [21], which integrates the datasets COLDataset [7], SWSR [14] and CDIAL-BIAS DATASET [30] to balance the distribution difference of toxic contents (see Table 6). To construct the CHSD-subset, we first filter out samples that appear in the part of COLDataset used in the test set. Next, we select samples based on toxic keywords from the two lexicons of ToxicloakCN and COLDataset, ensuring that our fine-tuning data contains relevant toxic content. Then we merge the two lexicons and apply JioNLP [5] to introduce noise.
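To make the homo-graph construction and matching described above concrete, here is a minimal sketch. The PINYIN table is a tiny hypothetical stand-in for pypinyin's tone-ignored output, and only identical-pinyin edges are built (the paper also connects phonetically similar pinyins); the protoword 妈的 is taken from the ToxicloakCN lexicon in Appendix H:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical tone-ignored pinyin table standing in for pypinyin's output.
PINYIN = {"吗": "ma", "马": "ma", "妈": "ma", "的": "de", "得": "de", "你": "ni"}

def build_homo_graph(pinyin):
    """Connect characters that share a tone-ignored pinyin; every character
    is also connected to itself so un-cloaked characters still match."""
    edges, groups = set(), defaultdict(list)
    for ch, py in pinyin.items():
        edges.add((ch, ch))
        groups[py].append(ch)
    for chars in groups.values():
        for a, b in combinations(chars, 2):
            edges.add((a, b))
            edges.add((b, a))
    return edges

def match_toxic(text, lexicon, edges):
    """Slide a window over the text; a window matches a lexicon word if every
    aligned character pair is an edge of the homo-graph (cf. Algorithm 1)."""
    hits = []
    for word in lexicon:
        n = len(word)
        for i in range(len(text) - n + 1):
            window = text[i:i + n]
            if all((a, b) in edges for a, b in zip(window, word)):
                hits.append((window, word))
    return hits

g = build_homo_graph(PINYIN)
print(match_toxic("你马的", ["妈的"], g))
```

Here 马的 is flagged as a candidate cloak of 妈的 because 马 and 妈 share the pinyin "ma" and 的 matches itself.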
NMSL: https://github.com/THUzhangga/NMSL

Note that we use the original lexicon of ToxiCN here. Finally, we construct the CHSD-subset with 2,790 samples, split into 80% training and 20% validation data. We then fine-tune the pre-trained SCOPE model with the following settings: learning rate = 5e-5, batch size = 32, accumulate grad batches = 2, epochs = 20, warmup proportion = 0.1. The details of the CHSD-subset are presented in Table 6.

Table 4: Dataset size and lexicon size of ToxicloakCN, COLDataset and CHSD-subset.

Dataset        Size     Lexicon size
ToxicloakCN    4,586    387
COLDataset     10,415   115
CHSD-subset    2,790    602

Table 5: Homo-graph size of ToxicloakCN and COLDataset.

Dataset        Node size   Edge size
ToxicloakCN    3,613       141,683
COLDataset     3,879       163,969

Table 6: The distribution of the CHSD-subset. T.L refers to ToxiCN's lexicon, while C.L refers to COLDataset's lexicon. We consider a sample to be aligned with the distribution of a given dataset if it contains a toxic word from that dataset's lexicon.

           w/ T.L   w/o T.L
w/ C.L     1,706    542
w/o C.L    542      0

F Pseudocode of Matching and Filtering

F.1 Toxic Words Matching Algorithm

Algorithm 1 Toxic Words Matching
Input: User comment X, lexicon L, homo-graph G
Output: Potentially cloaked toxic words Wp
1:  W ← SlidingWindow(X), Wp ← ∅
2:  for each w(i) in W, each l(j) in L do
3:
    if len(w(i)) = len(l(j)) then
4:      N ← len(w(i)), flag ← True
5:      for each k ∈ {1, 2, ..., N} do
6:          if G.HasEdge(w(i)_k, l(j)_k) ≠ 1 then
7:              flag ← False
8:          end if
9:      end for
10:     if flag = True then
11:         Wp ← Wp ∪ {(w(i), l(j))}
12:     end if
13:  end if
14: end for
15: return Wp

F.2 Filtering Toxic Words Algorithm

Algorithm 2 Filtering Toxic Words
Input: X, Wp, P(·)
Output: Corrected text X′
1:  Initialize X′ ← X
2:  while Wp is not empty do
3:      for each tuple (w(i), l(j)) in Wp do
4:          X_pre, X_tail ← GetPreTail(X′, w(i))
5:          P_w(i) ← P(w(i) | X_pre, X_tail)
6:          P_l(i) ← P(l(i) | X_pre, X_tail)
7:          d_i ← ProbDiff(P_w(i), P_l(i))
8:      end for
9:      Get the index with the most significant difference: k ← argmin_i d_i
10:     if P_w(k) < P_l(k) then
11:         X_pre, X_tail ← GetPreTail(X′, w(k))
12:         Replace: X′ ← [X_pre, l(k), X_tail]
13:         Wp ← Wp \ {(w(k), l(k))}
14:     else
15:         break
16:     end if
17: end while
18: return X′

G Prompt Template

Our prompt-based framework uses the following template to unveil cloaked toxicity:

Table 7: Prompt template for the prompt-based method.

- System: 任务: 针对中文敏感信息中带掩盖的错别字纠正。
  (Task: Correcting obfuscated typos in Chinese toxic text.)
  要求: 1. 需要严格保证输入句子和输出句子长度一致; 2. 只需要返回纠正后的句子,不要输出任何其他内容。
  (Requirements: 1. Input-output length consistency is mandatory; 2. Return only the corrected sentence without any additional output.)
- User: <Input sentence>
- Assistant: <Output sentence>

H Toxic Lexicon

We present the toxic lexicons of ToxicloakCN and COLDataset here, splitting each lexicon by toxic word length. Specifically, the toxic lexicon of ToxicloakCN is derived from the lexicon proposed in ToxiCN, with additional noise reduction and deduplication; each toxic word is restored to its proto form, as the original lexicon contains a large number of cloaked toxic words. The toxic lexicon of COLDataset, on the other hand, is constructed from the crawler keywords used during the data collection process.

Table 8: ToxicloakCN's Lexicon.
1-length: {“ 魄”, “蠢”, “鸡”, “原”, “亩”, “驴”, “蛮”, “干”, “”, “呸”, “孽”, “蛆”, “批”, “”, “吊”, “苟”, “操”, “婊”, “狗”, “默”, “粪”, “猪”, “贱”, “骚”, “瞎”, “倭”} 2-length: {“ 牲口”, “黑贵”, “反同”, “烂货”, “黑杂”, “胡建”, “废物”, “京巴”, “口区 ”, “魔怔”, “国 男”, “妓女 ”, “鬼话”, “捧杀”, “色胚”, “小鬼”, “脑瘫”, “贵物”, “黑皮”, “艾基”, “眼瞎 ”, “疯狗”, “虫 族”, “黑犬”, “龟奴”, “女拳”, “开苞”, “八婆”, “穆狗”, “婊子”, “丁丁 ”, “骚女”, “菊花”, “媚黑”, “黑 吹”, “歪皮”, “诡雷”, “肥猪”, “傻呗”, “坦克”, “孝子 ”, “煤精”, “白完”, “龟男”, “国女”, “瘠薄”, “黑 混”, “人妖”, “孙杂”, “黑蛆”, “冲爆”, “鲍鱼 ”, “命贵”, “虫混”, “色狼”, “爪牙 ”, “棒男”, “男拳”, “媚 白”, “操蛋”, “黑砸”, “变态”, “反黑”, “蠢驴”, “绝育”, “男淫”, “鸵鸟 ”, “拳虱”, “鬼子”, “母狗”, “直 佬”, “白莲”, “杂碎”, “染艾”, “母坦”, “货色”, “三哥”, “满子”, “沙软”, “嗨人”, “伞兵”, “特么”, “智 障”, “僵尸”, “巨婴”, “黑鬼”, “东夷”, “奴才”, “妈的”, “下头”, “放屁”, “基蛆”, “黑爹”, “渣子”, “黑 狗”, “黑虫”, “腐癌”, “舔狗”, “鸡巴”, “肖万”, “南蛮”, “黑黑 ”, “娘们”, “类人”, “勾八”, “棒子”, “腐 女”, “垃圾 ”, “人猿”, “你妈”, “棒女”, “屠黑”, “绿茶”, “暴毙”, “乞丐 ”, “孽畜”, “反默”, “嘴炮”, “屠 默”, “黑男”, “母猪”, “撑同”, “恶熏”, “狗屁”, “男人”, “畜牲”, “撒币”, “打拳 ”, “小丑”, “母的”, “喃 蛮”, “默人”, “恐同”, “灭默”, “猩猩 ”, “北佬”, “弱智”, “母畜”, “烧鸡”, “女贼”, “织女”, “恋童”, “杂 皮”, “娘炮”, “猪精”, “串串 ”, “老鼠”, “跪舔”, “跪洋”, “公狗”, “乐色”, “蛮子”, “黑女”, “母朱”, “巴 铁”, “双标”, “犬男”, “反白”, “尼哥”, “日杂”,
“母蛆”, “杠精”, “猎默”, “狗贼”, “鬼母”, “恶心”, “白 男”, “傻子”, “混黑”, “杀默”, “断袖”, “西戎”, “国铝”, “沙口”, “强奸”, “母人”, “屠同”, “北狄”, “白 皮”, “跪族”, “默妖”, “厌女”, “活该”, “绿帽”, “黑畜”, “畜生 ”, “黑逼”, “阿娜”, “网暴”, “黑族”, “普 信”, “粪蛋”, “傻逼”, “黑粪”, “男同”, “舔黑”, “西八”, “圣母”, “呆子”, “牛马”, “东百”, “喷子”, “同 志”, “虫类”, “阿三”, “窑姐”, “拳畜”, “基佬”, “矮子”, “瘪三”, “蛮夷”, “倭寇”, “杂种”, “拳师”, “干 死”, “奴隶”, “憨憨 ”, “傻卵”, “他妈”, “蛀虫 ”, “造孽”, “湾湾 ”, “去死”, “小黑”, “黑淫”, “母拳”, “走 狗”, “黑哥”, “倭狗”, “鸟事”, “百越”, “逼”, “崽子”, “儒猴”, “败类”, “憨批”, “三非”, “杂毛”, “鼠 鼠”, “爆杀”, “神经”, “非洲”, “仙女”, “洋爹”} 3-length: {“黑猩猩 ”, “山东葱”, “狗东西”, “黑泥鳅”, “奇趣蛋”, “昆仑奴”, “女厕所”, “漂亮国”, “山 越猴”, “熊孩子 ”, “哥布林”, “田园婊”, “田园女”, “绿茶婊”, “龟仙人 ”, “黑杂碎”, “子宫战”, “狗腿 子”, “犹太狗”, “强奸犯”, “繁殖批”, “非洲人”, “黑命贵”, “南宋人”, “黑子哥”, “歪果仁”, “烂裤子”, “你妈的”, “慰安妇”, “金针菇”, “类人猿”, “小仙女”, “小屁孩”, “狗日的”, “黑玩意”, “法西斯”, “陕 蛋蛋 ”, “给爷爬 ”, “妈宝男”, “洋垃圾 ”, “九头鸟”, “黑社会”, “吸血鬼”, “黑乐色”, “洋大人”, “小鬼 子”, “烂裤裆 ”, “小吊子”, “死一死”, “美国佬”, “乡巴佬”, “小日本”, “黑非洲”, “他妈的”, “二极管”, “黑妹妹 ”, “牛头人”, “非洲佬”, “黑猴子”, “泥娃娃 ”, “妈宝女”, “同杏恋”, “神经病”, “白皮猪”, “脑 残女”, “黑哥哥 ”, “反三非”, “南大人”, “肉便器”, “黑娃娃 ”, “小日子”, “普信女”, “凯勒奇”, “黑沙 口”, “绿毛龟”, “普信男”, “街溜子”, “精神病”, “乐子人”, “黑叔叔 ”, “小二黑”, “艾滋佬”, “通讯录”, “黑杂种”, “黑北鼻”, “下三滥”, “羊大人”, “搅屎棍”, “红脖子”, “乌龟精”, “偷井盖”, “有大病”, “站 街女”, “吸血虫”, “铁花生”, “黑小子”, “直男癌”, “寄生虫”} 4-length: {“ 红毛鬼子”, “你他妈的”, “迟早要完”, “人造子宫 ”, “牛鬼蛇神”, “你妈妈的”, “黑不溜 秋”, “高卢乌鸡”, “鬼子虫类”, “崇洋媚外”, “捏妈妈的”, “龙阳之好”, “田园女权”, “七尺大儒”} 5-length: {“ 中华田园犬”, “凯勒奇计划”, “一棒子打死”} 6-length: {“ 国际鬼子虫类”} 7-length: {“你是个什么东 西”} 17 Table 9: COLDataset ’s Lexicon. COLDataset ’s Lexicon. 
2-length: {“黑鬼”, “黑人”, “非裔”, “黑奴”, “黑命”, “白佬”, “白人”, “阿三”, “华人”, “亚裔”, “移民”, “日本”, “韩国”, “日韩”, “外省”, “内地”, “大陆”, “乡下”, “外地”, “农村”, “河北”, “山西”, “辽宁”, “吉林”, “江苏”, “浙江”, “安徽”, “福建”, “江西”, “山东”, “河南”, “湖北”, “湖南”, “广东”, “海南”, “四川”, “贵州”, “云南”, “陕西”, “甘肃”, “青海”, “台湾”, “广西”, “西藏”, “宁夏”, “新疆”, “香港”, “澳门”, “北京”, “天津”, “上海”, “重庆”, “苏北”, “苏南”, “东北”, “长春”, “赤峰”, “潮汕”, “西北”, “女权”, “女拳”, “男权”, “男拳”, “小三”, “妇女”, “主妇”, “女人”, “女性”, “女生”, “女方”, “女友”, “男友”, “老婆”, “女孩”, “少女”, “女神”, “男人”, “男性”, “男生”, “男方”, “老公”, “丈夫”, “男孩”, “爸爸”, “妈妈”} 3-length: {“非洲裔”, “印度人”, “中国人”, “黄种人”, “黄皮子”, “拉丁裔”, “东南亚”, “犹太人”, “南亚裔”, “亚洲人”, “地域黑”, “城里人”, “黑龙江”, “内蒙古”, “江浙沪”, “同性恋”, “外地女”, “女司机”, “女朋友”, “男朋友”, “肌肉男”} 4-length: {“白人女性”, “白人老爷”, “种族偏见”, “种族歧视”, “地域偏见”, “地域歧视”, “性别歧视”, “重男轻女”}
I Full Results of Ablation Study
Figure 5 presents the full results of the ablation study among C2TU, C2TU-WP and C2TU-SR on both datasets. The C2TU-BERT is employed on bert-chinese-base and the C2TU-LLM is employed on Baichuan2-7B-Base.
Figure 5: Full results of ablation study. [Panels: Accuracy and F1 score on ToxicloakCN and COLDataset for sentence-level detection, sentence-level correction, character-level detection, and character-level correction, comparing C2TU-WP, C2TU-SR, C2TU-BERT, and C2TU-LLM.]
NeurIPS Paper Checklist
1. Claims
Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?
Answer: [Yes]
https://arxiv.org/abs/2505.22184v1
Justification: The key claims we make in the abstract and introduction accurately reflect the contribution and scope of the paper. Guidelines: •The answer NA means that the abstract and introduction do not include the claims made in the paper. •The abstract and/or introduction should clearly state the claims made, including the contributions made in the paper and important assumptions and limitations. A No or NA answer to this question will not be perceived well by the reviewers. •The claims made should match theoretical and experimental results, and reflect how much the results can be expected to generalize to other settings. •It is fine to include aspirational goals as motivation as long as it is clear that these goals are not attained by the paper. 2.Limitations Question: Does the paper discuss the limitations of the work performed by the authors? Answer: [Yes] Justification: We discuss the limitations in Appendix B. Guidelines: •The answer NA means that the paper has no limitation while the answer No means that the paper has limitations, but those are not discussed in the paper. • The authors are encouraged to create a separate "Limitations" section in their paper. •The paper should point out any strong assumptions and how robust the results are to violations of these assumptions (e.g., independence assumptions, noiseless settings, model well-specification, asymptotic approximations only holding locally). The authors should reflect on how these assumptions might be violated in practice and what the implications would be. •The authors should reflect on the scope of the claims made, e.g., if the approach was only tested on a few datasets or with a few runs. In general, empirical results often depend on implicit assumptions, which should be articulated. •The authors should reflect on the factors that influence the performance of the approach. 
For example, a facial recognition algorithm may perform poorly when image resolution is low or images are taken in low lighting. Or a speech-to-text system might not be used reliably to provide closed captions for online lectures because it fails to handle technical jargon. •The authors should discuss the computational efficiency of the proposed algorithms and how they scale with dataset size. •If applicable, the authors should discuss possible limitations of their approach to address problems of privacy and fairness. •While the authors might fear that complete honesty about limitations might be used by reviewers as grounds for rejection, a worse outcome might be that reviewers discover limitations that aren’t acknowledged in the paper. The authors should use their best judgment and recognize that individual actions in favor of transparency play an important role in developing norms that preserve the integrity of the community. Reviewers will be specifically instructed to not penalize honesty concerning limitations. 3. Theory assumptions and proofs Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof? Answer: [Yes] Justification: We prove Theorem 1, stated in Section 3.4.2, in Appendix C. Guidelines: • The answer NA means that the paper does not include theoretical results. •All the theorems, formulas, and proofs in
the paper should be numbered and cross-referenced. •All assumptions should be clearly stated or referenced in the statement of any theorems. •The proofs can either appear in the main paper or the supplemental material, but if they appear in the supplemental material, the authors are encouraged to provide a short proof sketch to provide intuition. •Inversely, any informal proof provided in the core of the paper should be complemented by formal proofs provided in appendix or supplemental material. • Theorems and Lemmas that the proof relies upon should be properly referenced. 4. Experimental result reproducibility Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)? Answer: [Yes] Justification: All models we use are open-source. We provide our code and the dataset we use, clearly state our approach, and the results in our paper are reproducible. Guidelines: • The answer NA means that the paper does not include experiments. •If the paper includes experiments, a No answer to this question will not be perceived well by the reviewers: Making the paper reproducible is important, regardless of whether the code and data are provided or not. •If the contribution is a dataset and/or model, the authors should describe the steps taken to make their results reproducible or verifiable. •Depending on the contribution, reproducibility can be accomplished in various ways. For example, if the contribution is a novel architecture, describing the architecture fully might suffice, or if the contribution is a specific model and empirical evaluation, it may be necessary to either make it possible for others to replicate the model with the same dataset, or provide access to the model. In general,
releasing code and data is often one good way to accomplish this, but reproducibility can also be provided via detailed instructions for how to replicate the results, access to a hosted model (e.g., in the case of a large language model), releasing of a model checkpoint, or other means that are appropriate to the research performed. •While NeurIPS does not require releasing code, the conference does require all submissions to provide some reasonable avenue for reproducibility, which may depend on the nature of the contribution. For example: (a) If the contribution is primarily a new algorithm, the paper should make it clear how to reproduce that algorithm. (b) If the contribution is primarily a new model architecture, the paper should describe the architecture clearly and fully. (c) If the contribution is a new model (e.g., a large language model), then there should either be a way to access this model for reproducing the results or a way to reproduce the model (e.g., with an open-source dataset or instructions for how to construct the dataset). (d) We recognize that reproducibility may be tricky in some cases, in which case authors are welcome to describe the particular way they provide for reproducibility. In the case of closed-source models, it may be that access
to the model is limited in some way (e.g., to registered users), but it should be possible for other researchers to have some path to reproducing or verifying the results. 5. Open access to data and code Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: We provide our code, as well as links to public datasets and models. Guidelines: • The answer NA means that the paper does not include experiments requiring code. •Please see the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. •While we encourage the release of code and data, we understand that this might not be possible, so “No” is an acceptable answer. Papers cannot be rejected simply for not including code, unless this is central to the contribution (e.g., for a new open-source benchmark). •The instructions should contain the exact command and environment needed to run to reproduce the results. See the NeurIPS code and data submission guidelines (https://nips.cc/public/guides/CodeSubmissionPolicy) for more details. •The authors should provide instructions on data access and preparation, including how to access the raw data, preprocessed data, intermediate data, and generated data, etc. •The authors should provide scripts to reproduce all experimental results for the new proposed method and baselines. If only a subset of experiments are reproducible, they should state which ones are omitted from the script and why. •At submission time, to preserve anonymity, the authors should release anonymized versions (if applicable). •Providing as much information as possible in supplemental material (appended to the paper) is recommended, but including URLs to data and code is permitted.
6. Experimental setting/details Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results? Answer: [Yes] Justification: We provide dataset details in Appendix E, and the experimental details follow the procedure in Section 3 without any hyperparameters or optimizer, as our method is totally training-free and prompt-free. The prompt template we use for the prompt-based LLM method is presented in Appendix G. Guidelines: • The answer NA means that the paper does not include experiments. •The experimental setting should be presented in the core of the paper to a level of detail that is necessary to appreciate the results and make sense of them. •The full details can be provided either with the code, in appendix, or as supplemental material. 7. Experiment statistical significance Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments? Answer: [No] Justification: Local results are deterministic via fixed random seeds, while API evaluations (Deepseek-V3) used single runs due to cost limitations. Guidelines: • The answer NA means that the paper does not include experiments. •The authors should answer "Yes" if the results are accompanied by error bars, confidence intervals, or statistical significance tests, at least for the experiments that support
the main claims of the paper. •The factors of variability that the error bars are capturing should be clearly stated (for example, train/test split, initialization, random drawing of some parameter, or overall run with given experimental conditions). •The method for calculating the error bars should be explained (closed form formula, call to a library function, bootstrap, etc.) • The assumptions made should be given (e.g., Normally distributed errors). •It should be clear whether the error bar is the standard deviation or the standard error of the mean. •It is OK to report 1-sigma error bars, but one should state it. The authors should preferably report a 2-sigma error bar than state that they have a 96% CI, if the hypothesis of Normality of errors is not verified. •For asymmetric distributions, the authors should be careful not to show in tables or figures symmetric error bars that would yield results that are out of range (e.g. negative error rates). •If error bars are reported in tables or plots, the authors should explain in the text how they were calculated and reference the corresponding figures or tables in the text. 8. Experiments compute resources Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments? Answer: [Yes] Justification: We provide the computational resources required for the experiments in Section 4.1. Guidelines: • The answer NA means that the paper does not include experiments. •The paper should indicate the type of compute workers CPU or GPU, internal cluster, or cloud provider, including relevant memory and storage. •The paper should provide the amount of compute required for each of the individual experimental runs as well as estimate the total compute.
•The paper should disclose whether the full research project required more compute than the experiments reported in the paper (e.g., preliminary or failed experiments that didn’t make it into the paper). 9. Code of ethics Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines ? Answer: [Yes] Justification: Yes, the research conducted in the paper fully conforms to the NeurIPS Code of Ethics in every respect. Guidelines: •The answer NA means that the authors have not reviewed the NeurIPS Code of Ethics. •If the authors answer No, they should explain the special circumstances that require a deviation from the Code of Ethics. •The authors should make sure to preserve anonymity (e.g., if there is a special consideration due to laws or regulations in their jurisdiction). 10. Broader impacts Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed? Answer: [Yes] Justification: We discuss societal impacts in Appendix A. Guidelines: • The answer NA means that there is no societal impact of the work performed. •If the authors answer NA or No, they should explain why their work has no societal impact or why the paper does not address societal impact. •Examples of negative societal impacts include potential malicious or unintended uses
(e.g., disinformation, generating fake profiles, surveillance), fairness considerations (e.g., deployment of technologies that could make decisions that unfairly impact specific groups), privacy considerations, and security considerations. •The conference expects that many papers will be foundational research and not tied to particular applications, let alone deployments. However, if there is a direct path to any negative applications, the authors should point it out. For example, it is legitimate to point out that an improvement in the quality of generative models could be used to generate deepfakes for disinformation. On the other hand, it is not needed to point out that a generic algorithm for optimizing neural networks could enable people to train models that generate Deepfakes faster. •The authors should consider possible harms that could arise when the technology is being used as intended and functioning correctly, harms that could arise when the technology is being used as intended but gives incorrect results, and harms following from (intentional or unintentional) misuse of the technology. •If there are negative societal impacts, the authors could also discuss possible mitigation strategies (e.g., gated release of models, providing defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms to monitor how a system learns from feedback over time, improving the efficiency and accessibility of ML). 11.Safeguards Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)? Answer: [NA] Justification: We didn’t provide a new pre-trained model or a new dataset. Guidelines: • The answer NA means that the paper poses no such risks. 
•Released models that have a high risk for misuse or dual-use should be released with necessary safeguards to allow for controlled use of the model, for example by requiring that users adhere to usage guidelines or restrictions to access the model or implementing safety filters. •Datasets that have been scraped from the Internet could pose safety risks. The authors should describe how they avoided releasing unsafe images. •We recognize that providing effective safeguards is challenging, and many papers do not require this, but we encourage authors to take this into account and make a best faith effort. 12. Licenses for existing assets Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected? Answer: [Yes] Justification: Yes, the paper properly credits the creators or original owners of assets and respects the license and terms of use. Guidelines: • The answer NA means that the paper does not use existing assets. • The authors should cite the original paper that produced the code package or dataset. •The authors should state which version of the asset is used and, if possible, include a URL. • The name of the license (e.g., CC-BY 4.0) should be included for each asset. •For scraped data from a particular source (e.g., website), the copyright and terms of service of that source should be provided.
•If assets are released, the license, copyright information, and terms of use in the package should be provided. For popular datasets, paperswithcode.com/datasets has curated licenses for some datasets. Their licensing guide can help determine the license of a dataset. •For existing datasets that are re-packaged, both the original license and the license of the derived asset (if it has changed) should be provided. •If this information is not available online, the authors are encouraged to reach out to the asset’s creators. 13. New assets Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets? Answer: [Yes] Justification: The paper provides the anonymized URL of the source code in the abstract. Guidelines: • The answer NA means that the paper does not release new assets. •Researchers should communicate the details of the dataset/code/model as part of their submissions via structured templates. This includes details about training, license, limitations, etc. •The paper should discuss whether and how consent was obtained from people whose asset is used. •At submission time, remember to anonymize your assets (if applicable). You can either create an anonymized URL or include an anonymized zip file. 14. Crowdsourcing and research with human subjects Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)? Answer: [NA] Justification: The paper does not involve crowdsourcing nor research with human subjects. Guidelines: •The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. •Including this information in the supplemental material is fine, but if the main contribution of the paper involves human subjects, then as much detail as possible should be included in the main paper.
•According to the NeurIPS Code of Ethics, workers involved in data collection, curation, or other labor should be paid at least the minimum wage in the country of the data collector. 15. Institutional review board (IRB) approvals or equivalent for research with human subjects Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained? Answer: [NA] Justification: The paper does not involve crowdsourcing nor research with human subjects. Guidelines: •The answer NA means that the paper does not involve crowdsourcing nor research with human subjects. •Depending on the country in which research is conducted, IRB approval (or equivalent) may be required for any human subjects research. If you obtained IRB approval, you should clearly state this in the paper. •We recognize that the procedures for this may vary significantly between institutions and locations, and we expect authors to adhere to the NeurIPS Code of Ethics and the guidelines for their institution. •For initial submissions, do not include any information that would break anonymity (if applicable), such as the institution conducting the review. 16. Declaration of LLM usage Question: Does the paper describe the usage of
LLMs if it is an important, original, or non-standard component of the core methods in this research? Note that if the LLM is used only for writing, editing, or formatting purposes and does not impact the core methodology, scientific rigorousness, or originality of the research, declaration is not required. Answer: [NA] Justification: We only use LLMs to improve writing. Guidelines: •The answer NA means that the core method development in this research does not involve LLMs as any important, original, or non-standard components. •Please refer to our LLM policy (https://neurips.cc/Conferences/2025/LLM) for what should or should not be described.
arXiv:2505.22193v1 [quant-ph] 28 May 2025
Physics-inspired Generative AI models via real hardware-based noisy quantum diffusion
Marco Parigi1, Stefano Martina1,2*, Francesco Aldo Venturelli1,3, Filippo Caruso1,2 1Department of Physics and Astronomy, University of Florence, Via Sansone 1, Sesto Fiorentino, 50019, Florence, Italy. 2LENS - European Laboratory for Non-Linear Spectroscopy, University of Florence, Via Nello Carrara 1, Sesto Fiorentino, 50019, Florence, Italy. 3Department of Engineering, University Pompeu Fabra, Tànger 122-140, 08018, Barcelona, Spain. *Corresponding author(s). E-mail(s): stefano.martina@unifi.it; Contributing authors: marco.parigi@unifi.it; francescoaldo.venturelli@upf.edu; filippo.caruso@unifi.it;
Abstract Quantum Diffusion Models (QDMs) are an emerging paradigm in Generative AI that aims to use quantum properties to improve the performance of their classical counterparts. However, existing algorithms are not easily scalable due to the limitations of near-term quantum devices. Following our previous work on QDMs, here we propose and implement two physics-inspired protocols. In the first, we use the formalism of quantum stochastic walks, showing that a specific interplay of quantum and classical dynamics in the forward process produces statistically more robust models, generating sets of MNIST images with a lower Fréchet Inception Distance (FID) than totally classical dynamics. In the second approach, we realize an algorithm to generate images by exploiting the intrinsic noise of real IBM quantum hardware with only four qubits. Our work could be a starting point to pave the way for new scenarios for large-scale algorithms in quantum Generative AI, where quantum noise is neither mitigated nor corrected, but instead exploited as a useful resource. Keywords: Generative Diffusion Models, Quantum Machine Learning, Quantum Noise, Quantum Computing, Quantum Stochastic Walks.
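The abstract above evaluates generation quality with the Fréchet Inception Distance. As a reference for the reader (not part of the original text), FID compares Gaussians fitted to the Inception features of real and generated image sets:

```latex
\mathrm{FID} \;=\; \lVert \mu_r - \mu_g \rVert_2^2
\;+\; \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right),
```

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the mean and covariance of the feature distributions of real and generated samples, respectively; lower values indicate that the generated distribution is closer to the real one.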
1 Introduction

Generative Artificial Intelligence (GenAI) is one of the most interesting recent research fields. It uses Machine Learning (ML) models capable of learning the underlying structure of a finite set of samples to create new, original and meaningful content such as images, text, or other forms of data. Nowadays, GenAI technology is used in both academic and industrial applications to find new, creative, and efficient solutions to real-life problems. Over the years, different generative models have been proposed, such as Generative Adversarial Networks (GANs) [1], Variational Auto-Encoders (VAEs) [2] and Normalizing Flows (NFs) [3], which have shown great success in generating high-quality novel data. However, Denoising Diffusion Probabilistic Models [4, 5] (or simply Diffusion Models (DMs)) have recently achieved state-of-the-art performance, overcoming previous models in generative tasks such as image and audio synthesis [6–8].

DMs were introduced by Sohl-Dickstein et al. [4] and are inspired by a physical phenomenon of non-equilibrium thermodynamics, i.e. diffusion. The generic pipeline of DMs consists of two Markov chains, called forward (or diffusion) and backward (or denoising). In the forward chain, classical noise is injected by means of a stochastic process into the training samples until they become totally noisy. In the backward chain, Artificial Neural Networks (ANNs) are trained to iteratively remove the aforementioned perturbation so as to reverse the forward process, learn the unknown distribution of the training samples, and thus generate new samples. Currently, DMs are widely adopted in computer vision
https://arxiv.org/abs/2505.22193v1
tasks [9–11], text generation [12], sequential data modeling [13], and audio synthesis [8], and are one of the fundamental elements of famous and widespread GenAI technologies such as Stable Diffusion [14], DALL-E 4 [15] and Diffwave [8].

On the other hand, quantum computing is a rapidly emerging technology that harnesses peculiar quantum mechanical phenomena such as superposition, entanglement and coherence to solve complex problems with fewer resources, or problems that are intractable for classical (super)computers. For instance, milestone quantum algorithms such as Shor's factorization [16, 17] and Grover's search [18] have exponential and quadratic speed-ups, respectively, over their classical counterparts. Moreover, quantum computing promises a speed-up in simulating quantum systems [19–21], solving linear systems of equations [22], and optimization tasks [23, 24]. However, these algorithms require fault-tolerant quantum processors [25], i.e. hardware with a large number of error-corrected qubits. Consequently, they are not feasible on currently available Noisy Intermediate-Scale Quantum (NISQ) devices [26], which use Quantum Processing Units (QPUs) [27, 28] composed of a few hundred qubits that are highly prone to quantum noise. In order to reduce the effects of noise, quantum error correction techniques can be applied, which, however, require an elevated number of physical qubits [29, 30]. In this context, Google Quantum AI has recently developed a new quantum device called Willow that seems to allow an exponential reduction of errors while increasing the number of qubits [31].

An active area of research in quantum computation involves algorithms based on Quantum Walks (QWs). QWs were formally introduced by Aharonov et al. [32] as the quantum mechanical counterpart of Classical Random Walks (CRWs) and are exploited in many quantum protocols today.
It has been shown that QWs outperform CRWs, for example, in search algorithms [33–35], transport phenomena [36–38], secure communications and cryptography protocols [39, 40], and distinguishing hard graphs [41]. QWs can also be used as a primitive of a universal model for quantum computation [42–44] and can be implemented efficiently in physical experiments [45–47] and on quantum processors [48–50]. QWs are part of a more general family: Quantum Stochastic Walks (QSWs) [51], which describe the evolution of a quantum mechanical walker by means of quantum stochastic equations of motion and also generalize classical random walkers.

Quantum Machine Learning (QML) is an emerging field that integrates quantum computing and ML techniques [52–54]. However, due to the limitations of NISQ devices, many QML algorithms are applied to toy problems where the data are reduced in terms of the number of features, or are integrated with classical models, implementing so-called hybrid quantum-classical algorithms (or NISQ algorithms) [55, 56]. Recently, a plethora of these algorithms have been proposed for different applications, for example: image processing [57, 58], quantum chemistry [59, 60], combinatorial and optimization problems [61], searching algorithms [62], and machine learning tasks [63]. In the context of Quantum Generative Artificial Intelligence (QGenAI), previous works generalize classical GenAI models into the quantum domain: Quantum Generative Adversarial Networks (QGANs) [64,
65], QVAE [66], and Quantum Diffusion Models (QDMs) [67]. An interesting aspect of QGenAI models is that they allow the integration of computational protocols with physical quantum devices. For example, QGANs have recently been realized using a silicon quantum photonic chip [68]. Moreover, a practical quantum advantage in generative modeling has been demonstrated in the data-limited scenario, comparing quantum against classical generative algorithms [69].

Concerning QDMs, numerical simulations show how the design of quantum-classical algorithms can improve the quality of generated data [70, 71], learn complex quantum distributions [72], reduce the number of trainable parameters in the denoising ANN [73], and potentially achieve sampling speeds faster than those of classical DMs [74]. However, one of the main challenges of these algorithms is their scalability on near-term quantum processors. In fact, the currently proposed QDM approaches are usually implemented in simplified scenarios [67, 75, 76], or use pre-processing techniques [70] and classical latent models [77] to reduce the dimensional representation of the data. In addition, the idea of harnessing quantum noise to corrupt data in the forward diffusion process has been explored in simulated scenarios through quantum noise channels [67, 76, 78].

In this work, we first study the performance in image generation of DMs when classical diffusion is replaced by or integrated with quantum stochastic dynamics in the forward process. In particular, using the formalism of QSWs, we show that a specific interplay of quantum-classical stochastic dynamics improves image generation quality, leading to lower Fréchet Inception Distance (FID) values between real and generated samples; the hybrid model is also statistically more robust than the classical DMs.
Then, in the second part, we implement a QW dynamics on a quantum circuit and exploit the intrinsic noise of a real IBM quantum processor to generate the MNIST dataset.

2 Results

2.1 Quantum, hybrid and classical stochastic diffusion

In classical DMs, the forward process maps an unknown initial data distribution q(x_0) into a final well-known distribution π(x_T) by a Markov chain that transforms the initial samples x_0 ∼ q(x_0) into pure noise samples x_T ∼ π(x_T) after T time steps. In this process, the features of the samples are mathematically represented as classical random walkers undergoing stochastic dynamics [5].

Fig. 1: Example of DMs for discrete data. An initial data distribution q(x_0) is transformed into a uniform categorical distribution π(x_2) after T = 2 time steps. The forward transition kernel is q(x_t | x_{t−1}), while p(x_{t−1} | x_t) is the transition kernel of the backward process, obtained by training an ANN. For a more detailed description of DMs, see Section 4.

Next, we consider the family of DMs for discrete categorical data. In this framework, a sample is a discrete scalar K-categorical random variable X that takes the value x_t ∈ {1, . . . , K} at the time step t ∈ [0, T] [12, 79]. In the following, we denote by x the one-hot version of x, that is, a vector whose elements are x_k = 1 for the category k and x_j = 0 for j ≠ k. In the forward, the
sample x_t at time t is obtained by drawing from the transition kernel:

x_t ∼ q(x_t | x_{t−1}),  (1)

q(x_t | x_{t−1}) = Cat(x_t; p = x_{t−1} Q_t),  (2)

where Cat(x; p) is the categorical distribution sampling the one-hot row vector x with probability p, x_{t−1} is the sample at time t−1, and Q_t is the matrix that contains the transition probabilities of X from one category to another at time t. The diffusion transition chain after T time steps is given by:

q(x_{0:T}) = ∏_{t=1}^{T} q(x_t | x_{t−1}).  (3)

In Fig. 1 we show an example of the forward and backward process for a 3-categorical random variable X. For a further description, see Section 4.

In this section, we study the performance in the image generation task when the classical stochastic dynamics in the forward chain is replaced by, or interplays with, quantum stochastic diffusion processes. For this purpose, we adopt the formalism of QSWs, which provides a useful tool to study the transition from classical to quantum diffusion dynamics. This decision is inspired by previous works where the QSW formalism is used to find an optimal mixing of classical and quantum dynamics for information transport [36–38]. Formally, a continuous-time QSW dynamics is described by the Kossakowski–Lindblad–Gorini master equation [80–82]:

dρ/dt = −(1−ω) i[H, ρ] + ω ∑_j ( L_j ρ L_j† − ½ {L_j† L_j, ρ} ),  (4)

where ρ is the walker density matrix, H is the Hamiltonian of the system describing the coherent evolution, and L_j are Lindblad operators responsible for the incoherent dynamics, which represent the interactions of the system with an external environment. The continuous parameter ω ∈ [0, 1] quantifies the interplay between coherent and incoherent evolution. For ω = 0, Eq. (4) describes the evolution of a pure QW. Instead, setting ω = 1 and choosing L_{ij} = √(S_{ij}) |i⟩⟨j|, Eq. (4) describes the motion of a CRW, where |i⟩ is the quantum basis state associated with node i of a graph and S_{ij} is the transition probability of the walker from node j to i.
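As a concrete illustration, Eq. (4) can be integrated numerically on a cycle graph. The sketch below is our own (not the authors' code): it uses first-order Euler steps, takes H as the adjacency matrix of the cycle, and uses the Lindblad operators L_ij = √(S_ij)|i⟩⟨j| of the classical limit described above, reproducing the smooth convergence to the uniform distribution for ω = 1.

```python
import numpy as np

# Sketch: Euler integration of the QSW master equation (Eq. 4) for a
# walker on a cycle graph of N = 8 nodes. Assumptions (ours): H is the
# adjacency matrix and L_ij = sqrt(S_ij)|i><j| with S column-stochastic.

N = 8
A = np.zeros((N, N))
for i in range(N):                       # cycle graph adjacency
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
H = A                                    # coherent part
S = A / A.sum(axis=0)                    # CRW transition matrix (columns sum to 1)

def qsw_step(rho, omega, dt=0.05):
    """One Euler step of d(rho)/dt for interpolation parameter omega."""
    coherent = -1j * (H @ rho - rho @ H)
    dissip = np.zeros_like(rho)
    for i in range(N):
        for j in range(N):
            if S[i, j] > 0:
                L = np.zeros((N, N), dtype=complex)
                L[i, j] = np.sqrt(S[i, j])
                LdL = L.conj().T @ L
                dissip += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * ((1 - omega) * coherent + omega * dissip)

def kl_to_uniform(p, eps=1e-12):
    """KL divergence between populations p and the uniform distribution."""
    u = np.full(N, 1.0 / N)
    p = np.clip(p, eps, None)
    return float(np.sum(p * np.log(p / u)))

rho = np.zeros((N, N), dtype=complex)
rho[0, 0] = 1.0                          # walker starts at node 0
for _ in range(400):                     # 400 Euler steps of dt = 0.05 -> t = 20
    rho = qsw_step(rho, omega=1.0)       # classical limit
populations = np.real(np.diag(rho))
print(kl_to_uniform(populations))        # decays smoothly toward 0
```

For ω = 0 the same loop (with `omega=0.0`) exhibits the oscillating KL behavior discussed below for Fig. 2.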
A more complete description of QWs and CRWs is given in Section 4.

In the following, a categorical data sample is represented by a quantum stochastic walker that evolves on a cycle graph of 8 nodes, which represent the categories of the sample. In this approach, the density operator ρ of Eq. (4) describes the state evolution of the walker on the graph: the diagonal elements ρ_ii give the probabilities that the walker is at node i, while the off-diagonal elements ρ_ij contain the information on the peculiar quantum mechanical effects (coherence) during the diffusion process. In the following, we refer to the diagonal and off-diagonal elements of the density matrix ρ as populations and coherences, respectively.

In Fig. 2 we show the evolution of the Kullback–Leibler (KL) divergence between the populations of the quantum stochastic walker on a cycle graph of 8 nodes and the corresponding uniform distribution, for different values of ω. In particular, we can observe that for pure quantum diffusion dynamics (ω = 0) the KL divergence manifests an oscillating behavior, while in the classical case (ω = 1) it smoothly converges to zero. In hybrid scenarios, with ω ∈ (0, 1), the presence of the incoherent term of Eq.
(4) dampens the oscillations of the pure quantum case, leading to faster convergence with respect to the classical case, for example for ω = 0.4, 0.6.

Fig. 2: The KL divergence between the populations of a single QSW and the uniform distribution on a cycle graph of 8 nodes, for different values of ω, after T = 20 time steps. The walker is initially in node 0, corresponding to the state ρ = |0⟩⟨0|, and its evolution over the graph is obtained by solving Eq. (4).

2.2 Image generation

Starting from the results of the previous section, a natural question arises: how do the different QSW dynamics impact the generation of new samples? To answer this question, we choose to perform image generation on the MNIST [83] dataset, scaling the pixel grayscale levels from [0, 255] to [0, 7]. We propose a forward dynamics, illustrated in Fig. 3, where each pixel of the image is an independent QSW on a cycle graph of 8 nodes (one for each gray intensity value of the pixel). For simplicity, in the following we describe the forward procedure for a single QSW. The walker is initially at node i_0, with i_0 = 0, 1, . . . , 7, and is described by the quantum state ρ_0 = |i_0⟩⟨i_0|. Subsequently, the state of the walker evolves by Eq. (4), and we collect the populations of the states ρ_1, . . . , ρ_T and use them to define the categorical distributions of the forward process, from which we sample the positions of the walker at each time step:

i_t ∼ Cat(p = diag(ρ_{t−1})),  (5)

ρ_t = |i_t⟩⟨i_t|,  (6)

where i_t is the walker position on the graph at time t, and ρ_t is the corresponding quantum state.

Fig. 3: Illustration of the model. Each pixel of an image sample represents an independent quantum stochastic walker moving on a cycle graph of 8 nodes that correspond to the gray intensity values. The walker moves on the graph by Eq. (4).
The backward process is implemented with a Multilayer Perceptron (MLP) that takes as input the one-hot encoding of the positions of the walkers at time t for all pixels of the image, and is trained to predict their positions at time t−1. For details on the model, the loss function, and the training procedure, see Section 4.

In order to evaluate the generation performance for different quantum stochastic dynamics, we compute the FID metric [84], which assesses the quality of images created by a generative model. More precisely, the FID metric calculates the distance between the original and the generated datasets, and is given by:

FID = ||μ − μ′||_2^2 + tr( Σ + Σ′ − 2(ΣΣ′)^{1/2} ),  (7)

Fig. 4: Box plot of the FID values for different values of ω. Every box plot is obtained from 10 different repetitions for the same value of ω, with T = 20 and an 8-cycle graph. Mean
and standard error of the mean are also reported. The plot shows how the hybrid quantum-classical diffusion dynamics (ω = 0.3) generates statistically better image datasets.

where μ and μ′ are, respectively, the means of the multivariate normal distributions of the features of the original and generated image datasets, and Σ and Σ′ are the corresponding covariances. A higher value of FID indicates a poorer performance of the generative model. Moreover, to statistically assess the performance of different models, we run 10 simulations for each value of ω ∈ {0, 0.1, 0.2, . . . , 1}, and report in Fig. 4 the box plot visualization of the FID values for the generated digit 0 (box plots require at least 5 samples to provide solid statistical information [85]). We observe that for a hybrid quantum-classical stochastic dynamics (ω = 0.3), the mean value of the FID is lower than in the classical case (ω = 1). Moreover, the box plot shows that in the hybrid scenario the FID values of the 10 simulations are closely distributed around the median, while in the classical case most of the FID values are above it. In addition, all simulations for ω = 0.3, except for the single upper outlier, result in better FIDs than half of the runs for ω = 1. This means that the model with a specific interplay of quantum-classical dynamics is able to steadily generate better samples and is also statistically more robust than the classical one.
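Eq. (7) is straightforward to evaluate once the feature means and covariances are estimated. The NumPy sketch below is our own; the feature sets `real` and `fake` are synthetic Gaussian stand-ins for the features a real extractor would produce.

```python
import numpy as np

# Sketch of the FID metric (Eq. 7) between two multivariate Gaussians.
# The matrix square root uses a pure-NumPy eigendecomposition, exploiting
# tr((S1 S2)^{1/2}) = tr((S1^{1/2} S2 S1^{1/2})^{1/2}).

def sqrtm_psd(m):
    """Square root of a symmetric positive semi-definite matrix."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def fid(mu1, sigma1, mu2, sigma2):
    s1h = sqrtm_psd(sigma1)
    covmean = sqrtm_psd(s1h @ sigma2 @ s1h)   # same trace as (sigma1 sigma2)^{1/2}
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 4))              # "original" features
fake = rng.normal(loc=0.5, size=(500, 4))     # shifted "generated" features
mu_r, sig_r = real.mean(axis=0), np.cov(real, rowvar=False)
mu_f, sig_f = fake.mean(axis=0), np.cov(fake, rowvar=False)
print(fid(mu_r, sig_r, mu_f, sig_f))          # > 0; identical inputs give ~0
```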
Fig. 5: Image generation with QSW-based DMs via a quantum (ω = 0), hybrid (ω = 0.3) and classical (ω = 1) forward chain of T = 20 time steps. We use the models with the lowest median FID values among the 10 repetitions of Fig. 4. We report the first two steps of the forward chain (first row) of 9 different samples, and the evolution of the distribution of the pixel values for the entire dataset of digits 0 of MNIST (blue bins in the second row), compared with the final uniform prior (orange bins in the second row); the KL divergence between the two distributions is reported on top. We also illustrate the final two steps of 9 generated samples, with the FID score between the entire training dataset and the generated dataset (third row). In the last row, we compare the distribution of the pixel values of the entire generated dataset (blue) with the training dataset distribution (orange), with the KL divergence between them.

Finally, in Fig. 5 we show 9 random samples of generated images of the digit 0 using quantum, hybrid and
classical stochastic dynamics.

2.3 Implementation on NISQ Devices

A hybrid QSW dynamics can be interpreted as a QW interacting with an external environment that introduces noise. In this section, we therefore perform image generation by implementing a QW that exploits the intrinsic noise of a NISQ device in the forward process. This dynamics is efficiently implemented on a cycle graph with the quantum circuit of Razzoli et al. [50]. The efficiency of the algorithm allows us to modulate the amount of noise introduced in the forward chain, which has low values by default and is increased by delays.

As in Section 2.2, we represent each pixel by a QW moving on a cycle graph of 8 nodes. Furthermore, the rotation-invariant property of the cycle graph allows us to run each walker as if starting from the same initial state, and then re-map the outcome to the specific value of the pixel color by a shift operation. In this way, the forward chain is run only once for all the QWs, a condition that is necessary to realize our algorithm on the limited availability of current quantum processors. More precisely, our model requires only 4 qubits to generate 28×28 grayscale MNIST images (normalized to 8 gray values): 3 qubits for the position of the QW on the 8 nodes of the graph and 1 qubit for the coin's degree of freedom of the quantum walker. The mapping of the pixel intensity values onto positions on the graph allows us to introduce quantum effects in the pixel dynamics, as well as to make our model scalable in image size with respect to other QDM

Fig. 6: Implementation of a discrete-time QW on a cycle graph using quantum circuits. H and P are the Hadamard and phase gates, respectively, and F is the quantum Fourier transform.
The outer shaded block indicates a single time step of the walker and is repeated t times, up to T. The inner shaded block modulates the amount of noise in the circuit by a delay operation scaled as in Eq. (8). The final measurements are also shown.

approaches for generating images [70, 77]. In Fig. 6 we depict the quantum circuit used for the implementation of the QW. At the end of the circuit we collect measurements on the position qubits to obtain the distribution of the walker at every t ∈ [1, T]. The latter is used to define the categorical function from which we draw the new positions of the walker on the graph. The noise in the circuit is injected with delay operations, expressed in seconds, by:

delay = c × scaling × dt,  (8)

scaling = ⌊ sin²( (π/2) · t/(T−1) ) · 8 ⌋ · 8,  (9)

where the value of scaling is truncated to the nearest integer multiple of 8 for hardware reasons, and dt = 5·10⁻¹⁰ is the time length, in seconds, of a single operation on the used devices. The value of c is
a coefficient used to guarantee the convergence of the forward process to the uniform distribution within the a priori fixed number of time steps T = 20; we choose c = 5·10⁴. This scaling of the injected noise is chosen in analogy to the cosine noise schedule in the classical DM of Nichol et al. [86]. The backward of the QW-based diffusion model is implemented with an MLP analogous to the one used for the QSW-based case of Section 2.2.

Fig. 7: Qubit connectivity map of the real QPU ibm_brisbane (courtesy of IBM [88]). Every circle represents a qubit, and lines represent their connections. Colors code the readout errors (circles) and the errors of the connections (lines). Dark blue indicates a small error, purple a large one.

As a proof of concept, we train our model with 6903 full-size 28×28 MNIST images of digit 0, with the forward implemented in Qiskit [87], initially simulated on fake_brisbane with 10⁵ shots and then run on the real device ibm_brisbane with 10⁴ shots (the topology is shown in Fig. 7). The number of shots is chosen as the maximum available for the simulator, and reduced considering the computational resources available for running the algorithm on the NISQ hardware. This forward protocol is well suited for the used QPU because it has a maximum connectivity degree of 3 and the coin qubit can interact directly with all position qubits. As an additional advantage, it is also possible to run several walkers in parallel on different parts of the same QPU. In Fig.
8 we show the forward and backward processes for image generation implemented on the real IBM machine. The equivalent figure obtained using the simulator is available in the supplementary materials (SM), together with the results on the other digits of MNIST. The first and third rows show, respectively, the quantum forward evolution via QWs (from left to right) and the image generation by a trained classical MLP (from right to left) at different time steps t for 9 samples of
Fig. 8: Image generation with a QW-based DM with noise from the real ibm_brisbane NISQ device. We report, for selected values of t, the evolution of 9 random samples of the real dataset in the forward (first row) and 9 random generated samples in the backward (third row). In the second row we show the evolution of the pixel value distributions (orange) for the entire training dataset, reporting the KL divergence with the uniform distribution (blue). Finally, in the fourth row we show the value distributions of a generated dataset of the same size as the training set, comparing it with the original dataset using the KL divergence. We also report, for t = 0, the FID of 352 between the original and generated datasets.

the original and generated datasets. In the second row, the transformation of the initial distribution of the pixel values (in blue) for the full training dataset overlaps with the desired final uniform distribution (in orange), with the KL divergence between the two reported on top. The final row reports the distribution of the pixel values for 6903 generated images (in blue) overlapped with the original distribution of the training dataset (in orange), with the KL divergence between the two on top. In the top distribution at t = 0 we report the value of FID = 352 calculated between the original dataset and the equally sized dataset of generated images. Comparing the results of Fig. 5 with Fig. 8, we observe that the latter generates samples with higher values of FID: 114 (hybrid QSW-based) versus 352 (IBM-based QW). One possible explanation for the gap between the simulated results and the real IBM-based ones could be that the underlying dynamics for the evolution of the pixel values is in discrete time for the QWs, which are implemented with circuits, while it is in continuous time for the QSWs, resulting in a more gradual introduction of noise during the forward process.
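For reference, the hardware-noise schedule of Eqs. (8)–(9) used in this forward process can be reproduced in a few lines. This is a sketch with the values quoted in the text (c = 5·10⁴, dt = 5·10⁻¹⁰ s, T = 20); our reading of the floor-truncation in Eq. (9) is an assumption.

```python
import math

# Sketch of the delay-based noise schedule (Eqs. 8-9). The constants
# follow the values quoted in the text; the floor placement is our reading.

c, dt, T = 5e4, 5e-10, 20

def scaling(t):
    # sin^2 ramp over t in [0, T-1], truncated to an integer multiple of 8
    return math.floor(math.sin(math.pi / 2 * t / (T - 1)) ** 2 * 8) * 8

def delay_seconds(t):
    return c * scaling(t) * dt           # Eq. (8)

for t in (1, 10, 19):
    print(t, scaling(t), delay_seconds(t))
```

The schedule starts at zero delay (low intrinsic noise only) and ramps up to 64 × dt × c = 1.6 ms of delay at the last step, mimicking a cosine-style noise schedule.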
3 Discussion and Conclusions

In this work, we show that the impact of quantum noise within the forward DM dynamics influences the effectiveness of the backward generation process. In particular, we find that a model with a hybrid QSW diffusion dynamics produces sets of samples with a lower FID and is also statistically more solid than its classical counterpart. Subsequently, we propose a model that allows one to generate images of any size by harnessing the noise of real NISQ devices, taking into account the topology and connectivity of the QPU. More precisely, we generate 28×28 gray-scale MNIST images using only 4 qubits by efficiently implementing a quantum walk dynamics on a cycle graph whose nodes are the color values of a single image pixel. The invariance property of the graph allows one to independently run the walkers for the single pixels starting from the same initial state and then remap the outcome to the specific value of the pixel color. Furthermore, our forward protocol enables the implementation of multiple walkers in parallel, requiring a
maximum connectivity degree of 3 between the qubits of a QPU. This allows for implementation on currently available NISQ devices.

In conclusion, we show how noise can be used as a resource in the context of QGenAI, and not only be a detrimental factor for quantum algorithms. Future research directions can focus on a deeper integration of the QPU topology with the QW implemented at the circuit level. In particular, we can take further advantage of the connectivity to increase the range of the pixel values, and thus generate higher-quality images. Moreover, error correction or mitigation techniques could be used to better control the level of noise in the quantum forward process, to further improve the capabilities of the backward network. Another interesting outlook is the possibility of a physical realization of the quantum walk, making our model directly applicable to quantum data, without performing any quantum embedding in the first stage of the algorithm and using quantum ANNs in the reverse process. Finally, we believe that the latter can be fruitful where it is necessary to learn unknown quantum phenomena, for instance in quantum sensing, metrology, chemistry, and biology scenarios.

4 Methods

4.1 Diffusion Models

Diffusion Models (DMs) are a class of latent variable generative models trained to learn the underlying unknown data distribution of a finite dataset in order to generate new, similarly distributed synthetic data. The core idea is to use a classical Markov chain to gradually convert an unknown (data) distribution, called the posterior, into a simple well-known distribution, e.g. Gaussian or uniform, called the prior. The most generic pipeline of DMs is characterized by a forward (or diffusion) and a backward (or denoising) process. In the forward process, an initial sample x_0 ∼ q(x_0) is corrupted into a sequence of T increasingly noisy latent variables x_1, x_2, . . . , x_T.
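For intuition, the most familiar instance of such a Markov corruption chain is Gaussian. The sketch below is a generic DDPM-style example (our own, with an illustrative linear β schedule), not the categorical kernel this paper actually uses (Section 4.2):

```python
import numpy as np

# Sketch: a Gaussian forward corruption chain. Each step applies
# q(x_t | x_{t-1}) = N(sqrt(1 - beta_t) * x_{t-1}, beta_t * I), so the
# data distribution is driven toward the N(0, 1) prior. The linear beta
# schedule and the toy 1-D "dataset" are illustrative choices.

rng = np.random.default_rng(0)
T = 200
betas = np.linspace(1e-4, 0.05, T)

x = rng.normal(loc=3.0, scale=0.1, size=1000)   # "data": a narrow Gaussian
for beta in betas:                               # T forward steps
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.normal(size=x.shape)

print(x.mean(), x.std())    # drifts toward the N(0, 1) prior
```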
Formally, the diffusion process is described by a classical random process via a Markov chain that gradually injects noise into the initial sample x_0. More precisely, this is realized by using the Markov transition kernel:

q(x_t | x_{t−1}) = K_π(x_t | x_{t−1}; β_t),  (10)

q(x) = ∫ dx′ K_π(x | x′; β) q(x′),  (11)

where β_t ∈ (0, 1) is a hyper-parameter of the model at time t (or, in physical terms, the diffusion rate) describing the level of noise added at each time step, and x_t and x_{t−1} are the random noisy latent variables at the time steps t and t−1, respectively. The schedule of β_t is chosen and fixed such that the initial data distribution q(x_0) converges to a well-known stationary distribution π(x_T) in the limit T → ∞. The forward trajectory after performing T time steps of diffusion can be written as:

q(x_{0:T}) = ∏_{t=1}^{T} q(x_t | x_{t−1}),  (12)

where the chain rule of probability and the Markov property of the process are used to factorize the joint distribution q(x_{0:T}). In addition, the diffusion process has no trainable parameters and therefore does not involve the use of any learning model. The idea of the backward process is to reverse the forward dynamics, moving from a pure noise sample x_T ∼ π(x_T) towards a
sample of the initial distribution q(x_0). The denoising is implemented by an ANN that is trained to learn the reverse trajectory of the diffusion process:

p(x_T) = π(x_T),  (13)

p_θ(x_{0:T}) = p(x_T) ∏_{t=1}^{T} p_θ(x_{t−1} | x_t),  (14)

where p_θ(x_{t−1} | x_t) is a parameterized transition kernel having the same functional form as q(x_t | x_{t−1}). A deep ANN, usually with a U-Net architecture [89], is used to estimate the parameters θ at each time step. The denoising network is trained to optimize the negative log-likelihood on the training data, writing the evidence lower bound (ELBO) [90] as follows:

L = D_KL( q(x_{0:T}) || p_θ(x_{0:T}) )
  = −E[ log p_θ(x_{0:T}) ] + const
  = E[ −log p(x_T) − ∑_{t≥1} log ( p_θ(x_{t−1} | x_t) / q(x_t | x_{t−1}) ) ] + const
  ≥ E[ −log p_θ(x_0) ] + const,  (15)

where Jensen's inequality holds in the last line, and D_KL(·||·) is the KL divergence, which computes the difference between two probability distributions. The objective of the optimization procedure is to minimize the loss function L to reduce the difference between the probability distribution q(x_0) and the parameterized distribution p_θ(x_0).

4.2 Diffusion models in discrete state-spaces

DMs in discrete state-spaces were introduced by Sohl-Dickstein et al. for binary random variables [4], and then generalized to categorical random variables with uniform transition probability distributions by Hoogeboom et al. [79]. In the general framework, given a scalar discrete K-categorical random variable X taking values x_t ∈ {1, . . . , K} for t ∈ [0, T], the forward transition probabilities from category i to category j at time t can be realized by matrices:

[Q_t]_{i,j} = q(x_t = j | x_{t−1} = i).  (16)

Denoting the one-hot version of x by the row vector x, i.e. a vector whose elements are x_k = 1 for the category k and x_j = 0 for j ≠ k, the forward transition kernel can be written as:

q(x_t | x_{t−1}) = Cat(x_t; p = x_{t−1} Q_t),  (17)

where Cat(x; p) is the categorical distribution over the one-hot row vector x with probability p.
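A minimal sketch of the kernel of Eqs. (16)–(17), assuming the uniform transition matrix of Hoogeboom et al.: each step keeps the current category with probability 1−β and otherwise resamples uniformly among the K categories (β and K are illustrative choices here, not values from the paper).

```python
import numpy as np

# Sketch of the categorical forward kernel (Eqs. 16-17) with a uniform
# transition matrix: Q = (1 - beta) * I + (beta / K) * ones. Each row of Q
# sums to one, so x_onehot @ Q is a valid categorical distribution.

rng = np.random.default_rng(0)
K, beta = 8, 0.2
Q = (1.0 - beta) * np.eye(K) + beta / K * np.ones((K, K))

def forward_step(x_onehot):
    p = x_onehot @ Q                 # row of transition probabilities (Eq. 17)
    k = rng.choice(K, p=p)           # draw the next category
    out = np.zeros(K)
    out[k] = 1.0
    return out

x = np.zeros(K)
x[3] = 1.0                           # start in category 3
for _ in range(30):
    x = forward_step(x)              # the chain mixes toward the uniform prior
```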
Starting from initial data x_0, the data x_t after t time steps can be sampled from the transition kernel:

x_t ∼ q(x_t | x_0),  (18)

q(x_t | x_0) = Cat(x_t; p = x_0 Q̄_t),  (19)

where Q̄_t := Q_1 Q_2 · · · Q_t is the cumulative product of the transition matrices. The rows of the matrix Q̄_t must satisfy two constraints: (i) they must sum to one, to conserve the probability mass; and (ii) they must converge to a known stationary distribution in the limit t → ∞. Moreover, it can be shown [12, 79] that a closed form of the categorical posterior q(x_{t−1} | x_t, x_0) can be computed:

q(x_{t−1} | x_t, x_0) = q(x_t | x_{t−1}, x_0) q(x_{t−1} | x_0) / q(x_t | x_0),  (20)

where q(x_t | x_{t−1}, x_0) = q(x_t | x_{t−1}) due to the Markov property, and all the terms can be calculated from Eqs. (17) and (19). The denoising process is implemented via an ANN predicting the logits of the parameterized distribution:

p_θ(x_{t−1} | x_t),  (21)

which has the functional form of a categorical distribution [12, 79]. The optimization is realized by minimizing the loss function:

L = D_KL( q(x_{t−1} | x_t, x_0) || p_θ(x_{t−1} | x_t) ).  (22)

4.3 Classical Random Walk on a Graph

A graph is a pair G = (V, E), where V is a finite and non-empty set whose elements v_i, i = 1, . . . , |V|, are called vertices (or nodes), and E is a non-empty set of unordered pairs of vertices e_{ij} = {v_i, v_j}, called edges (or links). The order and the size of a graph are the cardinalities |V| and |E| of the sets V and E, respectively.
https://arxiv.org/abs/2505.22193v1
The number d_v of edges connected to the vertex v is called the degree of the vertex. A graph is called undirected if the edges of the graph do not have a direction, and directed otherwise. A graph is completely defined by its adjacency matrix A, which contains the information on the topology of the graph and whose elements are defined as follows:

[A]_{i,j} = 1 if e_{ij} \in E, and [A]_{i,j} = 0 if e_{ij} \notin E. \qquad (23)

In a discrete-time CRW on an undirected graph G, at each step t \in \mathbb{N} a walker jumps between two connected nodes v_i, v_j with some probability that is described by the stochastic transition matrix S = [S]_{i,j} [36, 91, 92]. S is related to the adjacency matrix by the equation [S]_{i,j} = [A]_{i,j}/d_i. Formally, a walker is represented by a discrete random variable X, and its trajectory after T time steps is the set \{x_1, \ldots, x_T\}, where the value x_t, with t = 1, \ldots, T, corresponds to the node occupied by the walker at time t. A CRW is a Markov process, that is, the distribution at time t+1 depends only on the distribution at time t. Given \mathbf{p}_t = \{p_i^{(t)}\} the occupation probability distribution of the walker over the nodes v_i after t time steps, the evolution of the distribution at time t+1 is given by:

\mathbf{p}_{t+1} = S \mathbf{p}_t. \qquad (24)

The distribution of a CRW converges to a limiting stationary solution:

\pi = \Big( \frac{d_i}{2|E|} \Big), \qquad t \to \infty, \qquad (25)

independently of the initial distribution \mathbf{p}_0. For a d-regular graph (all vertices have the same degree d) the limiting stationary distribution is uniform over the nodes of the graph.

4.4 Quantum Walks

In quantum information theory, Quantum Walks were introduced by Aharonov et al. [32] as quantum analogues of classical random walks. However, while in CRWs the dynamics is purely stochastic at each time step, QWs evolve via a deterministic dynamics, and the stochasticity comes out only when a measurement is performed on the quantum state of the walker [32, 93].
Moreover, QWs involve peculiar quantum mechanical properties such as coherent superposition, entanglement, and quantum interference, resulting in a faster spread (ballistic in the quantum case, while diffusive in the classical one). There exist two different formulations of QWs: i) Continuous-time Quantum Walks (CTQWs) and ii) Discrete-time Quantum Walks (DTQWs). In the former, the unitary evolution operator can be applied at any time t, while in the latter, the operator can be applied only in discrete time steps. Furthermore, DTQWs need an extra degree of freedom, called the "coin", which stores directions and speeds up the dynamics of the walker [34].

4.4.1 Discrete-Time Quantum Walks on Graph

Given a graph G(V, E), let H_V be the Hilbert space spanned by the position states \{|v\rangle : v = 1, \ldots, |V|\}, and let H_A be an auxiliary Hilbert space spanned by the coin states \{|\downarrow\rangle, |\uparrow\rangle\}. The total Hilbert space H associated to a QW is obtained by the tensor product between the auxiliary space and the position space:

H = H_A \otimes H_V. \qquad (26)

In general, a state is written as:

|\psi\rangle = |a\rangle \otimes |v\rangle, \qquad |a\rangle \in H_A, \; |v\rangle \in H_V. \qquad (27)

The dynamics of the quantum walker is governed by the unitary single time-step operator \hat{U} acting on the total Hilbert space:

\hat{U} = \hat{S} \cdot (\hat{C} \otimes \hat{I}), \qquad (28)

where \hat{I} is the identity on the position space, \hat{C} is the coin operator acting on the auxiliary space, and \hat{S} is the shift operator acting only on the position space and moving the walker from state |v\rangle to state |v+1\rangle or |v-1\rangle. Formally, the shift operator is given by:

\hat{S} = \sum_v |\uparrow\rangle\langle\uparrow| \otimes |v+1\rangle\langle v| + |\downarrow\rangle\langle\downarrow| \otimes |v-1\rangle\langle v|. \qquad (29)

The coin operator \hat{C} can be chosen in the family of unitary operators, and its choice leads to symmetric (unbiased walk) or asymmetric (biased walk) distributions. A common choice for \hat{C} is the Hadamard coin:

\hat{H} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad (30)

which leads to an unbiased walk. The state of a quantum walk after t discrete time steps is given by:

|\psi(t)\rangle = \hat{U}^t |\psi(0)\rangle. \qquad (31)

In general, QWs do not converge to any limiting distribution, in contrast to the classical ones: the limit \lim_{t \to \infty} |\psi(t)\rangle does not exist due to the unitary evolution [94]. Moreover, the interference effects lead a quantum walker to spread quadratically faster with respect to its classical counterpart. Namely, in the classical case after t time steps the expected distance from the origin is of order \sigma \sim \sqrt{t}, while in the quantum case it is of order \sigma \sim t, as shown in Fig. 9.

Fig. 9: Comparison between the Gaussian probability distribution (violet squares) obtained from a CRW and the probability distribution of a QW with Hadamard coin (red circles).

4.5 Numerical and Real Implementation

The implementation is carried out in Python 3 using Qiskit [95], an IBM open-source software package to work with real quantum processors at the circuit level, QuTiP [96], an open-source computational physics software package to simulate open quantum system dynamics, and PyTorch [97], a flexible and efficient machine and deep learning library.
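The ballistic-versus-diffusive spread compared in Fig. 9 can be checked with a short simulation. The following sketch (plain NumPy, independent of the libraries above, and not the paper's own code) runs a Hadamard DTQW on the line via Eqs. (28)-(31) and compares its position spread with the classical σ ∼ √t:

```python
import numpy as np

def hadamard_walk_spread(T: int) -> float:
    """Std. dev. of position for a Hadamard DTQW on the line after T steps."""
    n = 2 * T + 1                          # positions -T..T
    psi = np.zeros((n, 2), dtype=complex)  # (position, coin) amplitudes
    psi[T, 0] = 1.0                        # walker at the origin, coin |up>
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin, Eq. (30)
    for _ in range(T):
        psi = psi @ H.T                    # coin toss on the auxiliary space
        shifted = np.zeros_like(psi)
        shifted[1:, 0] = psi[:-1, 0]       # coin |up> moves right (Eq. 29)
        shifted[:-1, 1] = psi[1:, 1]       # coin |down> moves left
        psi = shifted
    prob = (np.abs(psi) ** 2).sum(axis=1)  # measurement over positions
    x = np.arange(-T, T + 1)
    mean = (prob * x).sum()
    return float(np.sqrt((prob * (x - mean) ** 2).sum()))

T = 100
sigma_quantum = hadamard_walk_spread(T)
sigma_classical = np.sqrt(T)               # diffusive spread of the CRW
print(sigma_quantum, sigma_classical)      # ballistic sigma ~ t vs sigma ~ sqrt(t)
```

After 100 steps the quantum spread is several times the classical one, reproducing the quadratic speed-up discussed above.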
For the implementations of DMs via QSWs on a cycle graph, we used the QuTiP function mesolve to compute the evolution of the state \rho_t of the quantum stochastic walker in Eq. (4) for different values of \omega, fixing \delta t = 6 \cdot 10^{-1}. Regarding the forward process with QWs in Section 2.3, we use the Qiskit library to implement the QW dynamics both in simulation and on a real IBM QPU. The backward process is implemented via MLPs of PyTorch linear layers with Rectified Linear Unit (ReLU) activation functions. More precisely, the architecture is structured with a head layer of size 800 shared between all time steps t and two tail layers of the same size specialized for each step. The input layer takes the one-hot encoding of all the positions of the walkers at the time step t for all the image pixels, and the final layer predicts their logits at the previous time t-1. In optimization, the categorical cross-entropy loss is minimized for 10^4 epochs using Adam [98] with a batch size of 16 samples and a learning rate equal to 10^{-3}.

Acknowledgements

M.P. and S.M. acknowledge financial support from the PNRR MUR project PE0000023-NQSTI. F.C. also acknowledges financial support from the MUR Progetti di Ricerca di
Rilevante Interesse Nazionale (PRIN) Bando 2022 - project n. 20227HSE83 – ThAI-MIA funded by the European Union-Next Generation EU.

Author Contributions

M.P. and S.M. performed the implementation and experiments. M.P., S.M., F.A.V. and F.C. discussed and analyzed the results. M.P., S.M., F.C. conceived the methodology, while F.C. proposed and supervised the project. M.P., S.M. and F.A.V. wrote the original draft. M.P., S.M., F.A.V. and F.C. performed the review and editing.

Competing Interests

The authors declare no competing interests.

References

[1] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Communications of the ACM 63(11), 139–144 (2020)
[2] Kingma, D.P., Welling, M.: An introduction to variational autoencoders. Foundations and Trends® in Machine Learning 12(4), 307–392 (2019)
[3] Rezende, D., Mohamed, S.: Variational inference with normalizing flows. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 37, pp. 1530–1538. PMLR, Lille, France (2015)
[4] Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., Ganguli, S.: Deep unsupervised learning using nonequilibrium thermodynamics. In: Bach, F., Blei, D. (eds.) Proceedings of the 32nd International Conference on Machine Learning. Proceedings of Machine Learning Research, vol. 37, pp. 2256–2265. PMLR, Lille, France (2015)
[5] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M.F., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 6840–6851 (2020)
[6] Paiano, M., Martina, S., Giannelli, C., Caruso, F.: Transfer learning with generative models for object detection on limited datasets.
Machine Learning: Science and Technology 5(3), 035041 (2024)
[7] Dhariwal, P., Nichol, A.: Diffusion models beat GANs on image synthesis. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 8780–8794 (2021)
[8] Kong, Z., Ping, W., Huang, J., Zhao, K., Catanzaro, B.: DiffWave: A versatile diffusion model for audio synthesis. In: International Conference on Learning Representations (2021)
[9] Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E.L., Ghasemipour, K., Gontijo Lopes, R., Karagol Ayan, B., Salimans, T., et al.: Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems 35, 36479–36494 (2022)
[10] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., Ommer, B.: High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10684–10695 (2022)
[11] Lugmayr, A., Danelljan, M., Romero, A., Yu, F., Timofte, R., Van Gool, L.: RePaint: Inpainting using denoising diffusion probabilistic models. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11461–11471 (2022)
[12] Austin, J., Johnson, D.D., Ho, J., Tarlow, D., Berg, R.: Structured denoising diffusion models in discrete state-spaces. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 17981–17993. Curran Associates, Inc. (2021)
[13] Tashiro, Y.,
Song, J., Song, Y., Ermon, S.: CSDI: Conditional score-based diffusion models for probabilistic time series imputation. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 24804–24816. Curran Associates, Inc. (2021)
[14] Stable Diffusion. https://stability.ai/stable-image
[15] DALL-E4. https://dalle4ai.com
[16] Shor, P.W.: Algorithms for quantum computation: discrete logarithms and factoring. In: Proceedings 35th Annual Symposium on Foundations of Computer Science, pp. 124–134. IEEE Computer Society, USA (1994)
[17] Shor, P.W.: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM J. Comput. 26(5), 1484–1509 (1997)
[18] Grover, L.K.: A fast quantum mechanical algorithm for database search. In: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing. STOC '96, pp. 212–219. Association for Computing Machinery, New York, NY, USA (1996)
[19] Georgescu, I.M., Ashhab, S., Nori, F.: Quantum simulation. Rev. Mod. Phys. 86, 153–185 (2014)
[20] O'Malley, P.J.J., Babbush, R., Kivlichan, I.D., Romero, J., McClean, J.R., Barends, R., Kelly, J., Roushan, P., Tranter, A., Ding, N., Campbell, B., Chen, Y., Chen, Z., Chiaro, B., Dunsworth, A., Fowler, A.G., Jeffrey, E., Lucero, E., Megrant, A., Mutus, J.Y., Neeley, M., Neill, C., Quintana, C., Sank, D., Vainsencher, A., Wenner, J., White, T.C., Coveney, P.V., Love, P.J., Neven, H., Aspuru-Guzik, A., Martinis, J.M.: Scalable quantum simulation of molecular energies. Phys. Rev. X 6, 031007 (2016)
[21] Babbush, R., Wiebe, N., McClean, J., McClain, J., Neven, H., Chan, G.K.-L.: Low-depth quantum simulation of materials. Phys. Rev. X 8, 011044 (2018)
[22] Harrow, A.W., Hassidim, A., Lloyd, S.: Quantum algorithm for linear systems of equations. Phys. Rev. Lett.
103, 150502 (2009)
[23] Crosson, E., Harrow, A.W.: Simulated quantum annealing can be exponentially faster than classical simulated annealing. In: 2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS), pp. 714–723 (2016)
[24] Farhi, E., Harrow, A.W.: Quantum Supremacy through the Quantum Approximate Optimization Algorithm (2019)
[25] Preskill, J.: Fault-tolerant quantum computation. In: Introduction to Quantum Computation and Information, pp. 213–269. World Scientific (1998)
[26] Preskill, J.: Quantum Computing in the NISQ era and beyond. Quantum 2, 79 (2018)
[27] IBM Quantum Processing Unit. https://www.ibm.com/think/topics/qpu
[28] QuEra. https://www.quera.com/glossary/processing-unit
[29] Knill, E., Laflamme, R.: Theory of quantum error-correcting codes. Phys. Rev. A 55, 900–911 (1997)
[30] Campbell, E.: A series of fast-paced advances in quantum error correction. Nature Reviews Physics 6(3), 160–161 (2024)
[31] Google Quantum AI and Collaborators: Quantum error correction below the surface code threshold. Nature (2024)
[32] Aharonov, Y., Davidovich, L., Zagury, N.: Quantum random walks. Phys. Rev. A 48, 1687–1690 (1993)
[33] Childs, A.M., Goldstone, J.: Spatial search by quantum walk. Phys. Rev. A 70, 022314 (2004)
[34] Ambainis, A., Kempe, J., Rivosh, A.: Coins make quantum walks faster. In: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms. SODA '05, pp. 1099–1108. Society for Industrial and Applied Mathematics, USA (2005)
[35] Magniez, F., Nayak, A., Roland, J., Santha, M.: Search via quantum walk. SIAM Journal on Computing 40(1), 142–164 (2011)
[36]
Caruso, F.: Universally optimal noisy quantum walks on complex networks. New Journal of Physics 16(5), 055015 (2014)
[37] Caruso, F., Crespi, A., Ciriolo, A.G., Sciarrino, F., Osellame, R.: Fast escape of a quantum walker from an integrated photonic maze. Nature Communications 7(1), 11682 (2016)
[38] Dalla Pozza, N., Buffoni, L., Martina, S., Caruso, F.: Quantum reinforcement learning: the maze problem. Quantum Machine Intelligence 4(1), 11 (2022)
[39] Abd El-Latif, A.A., Abd-El-Atty, B., Amin, M., Iliyasu, A.M.: Quantum-inspired cascaded discrete-time quantum walks with induced chaotic dynamics and cryptographic applications. Scientific Reports 10(1), 1930 (2020)
[40] Zeuner, J., Pitsios, I., Tan, S.-H., Sharma, A.N., Fitzsimons, J.F., Osellame, R., Walther, P.: Experimental quantum homomorphic encryption. npj Quantum Information 7(1), 25 (2021)
[41] Kasture, S., Acheche, S., Henriet, L., Henry, L.-P.: Multiparticle quantum walks for distinguishing hard graphs (2025)
[42] Lovett, N.B., Cooper, S., Everitt, M., Trevers, M., Kendon, V.: Universal quantum computation using the discrete-time quantum walk. Phys. Rev. A 81, 042330 (2010)
[43] Singh, S., Chawla, P., Sarkar, A., Chandrashekar, C.M.: Universal quantum computing using single-particle discrete-time quantum walk. Scientific Reports 11(1), 11551 (2021)
[44] Chawla, P., Singh, S., Agarwal, A., Srinivasan, S., Chandrashekar, C.M.: Multi-qubit quantum computing using discrete-time quantum walks on closed graphs. Scientific Reports 13(1), 12078 (2023)
[45] Dür, W., Raussendorf, R., Kendon, V.M., Briegel, H.-J.: Quantum walks in optical lattices. Phys. Rev. A 66, 052319 (2002)
[46] Schreiber, A., Gábris, A., Rohde, P.P., Laiho, K., Štefaňák, M., Potoček, V., Hamilton, C., Jex, I., Silberhorn, C.: A 2d quantum walk simulation of two-particle dynamics.
Science 336(6077), 55–58 (2012)
[47] Goyal, S.K., Roux, F.S., Forbes, A., Konrad, T.: Implementing quantum walks using orbital angular momentum of classical light. Phys. Rev. Lett. 110, 263602 (2013)
[48] Lahini, Y., Steinbrecher, G.R., Bookatz, A.D., Englund, D.: Quantum logic using correlated one-dimensional quantum walks. npj Quantum Information 4(1), 2 (2018)
[49] Acasiete, F., Agostini, F.P., Moqadam, J.K., Portugal, R.: Implementation of quantum walks on IBM quantum computers. Quantum Information Processing 19(12), 426 (2020)
[50] Razzoli, L., Cenedese, G., Bondani, M., Benenti, G.: Efficient implementation of discrete-time quantum walks on quantum computers. Entropy 26(4) (2024)
[51] Whitfield, J.D., Rodríguez-Rosario, C.A., Aspuru-Guzik, A.: Quantum stochastic walks: A generalization of classical random walks and quantum walks. Physical Review A—Atomic, Molecular, and Optical Physics 81(2), 022323 (2010)
[52] Wittek, P.: Quantum Machine Learning: What Quantum Computing Means to Data Mining. Academic Press (2014)
[53] Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., Lloyd, S.: Quantum machine learning. Nature 549(7671), 195–202 (2017)
[54] Schuld, M., Petruccione, F.: Machine Learning with Quantum Computers. Springer (2021)
[55] McClean, J.R., Romero, J., Babbush, R., Aspuru-Guzik, A.: The theory of variational hybrid quantum-classical algorithms. New Journal of Physics 18(2), 023023 (2016)
[56] Bharti, K., Cervera-Lierta, A., Kyaw, T.H., Haug, T., Alperin-Lea, S., Anand, A., Degroote, M., Heimonen, H., Kottmann, J.S., Menke, T., Mok, W.-K., Sim, S., Kwek, L.-C., Aspuru-Guzik, A.: Noisy intermediate-scale quantum algorithms. Rev. Mod. Phys. 94,
015004 (2022)
[57] Das, S., Zhang, J., Martina, S., Suter, D., Caruso, F.: Quantum pattern recognition on real quantum processing units. Quantum Machine Intelligence 5(1), 16 (2023)
[58] Geng, A., Moghiseh, A., Redenbach, C., Schladitz, K.: A hybrid quantum image edge detector for the NISQ era. Quantum Machine Intelligence 4(2), 15 (2022)
[59] Peruzzo, A., McClean, J., Shadbolt, P., Yung, M.-H., Zhou, X.-Q., Love, P.J., Aspuru-Guzik, A., O'Brien, J.L.: A variational eigenvalue solver on a photonic quantum processor. Nature Communications 5(1), 4213 (2014)
[60] Kandala, A., Mezzacapo, A., Temme, K., Takita, M., Brink, M., Chow, J.M., Gambetta, J.M.: Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature 549(7671), 242–246 (2017)
[61] González-García, G., Trivedi, R., Cirac, J.I.: Error propagation in NISQ devices for solving classical optimization problems. PRX Quantum 3, 040326 (2022)
[62] Zhang, K., Rao, P., Yu, K., Lim, H., Korepin, V.: Implementation of efficient quantum search algorithms on NISQ computers. Quantum Information Processing 20(7), 233 (2021)
[63] Farhi, E., Neven, H.: Classification with Quantum Neural Networks on Near Term Processors (2018)
[64] Lloyd, S., Weedbrook, C.: Quantum generative adversarial learning. Phys. Rev. Lett. 121, 040502 (2018)
[65] Dallaire-Demers, P.-L., Killoran, N.: Quantum generative adversarial networks. Phys. Rev. A 98, 012324 (2018)
[66] Khoshaman, A., Vinci, W., Denis, B., Andriyash, E., Sadeghi, H., Amin, M.H.: Quantum variational autoencoder. Quantum Science and Technology 4(1), 014001 (2018)
[67] Parigi, M., Martina, S., Caruso, F.: Quantum-noise-driven generative diffusion models. Advanced Quantum Technologies, 2300401 (2024)
[68] Ma, H., Ye, L., Guo, X., Ruan, F., Zhao, Z., Li, M., Wang, Y., Yang, J.: Quantum generative adversarial networks in a silicon photonic chip with maximum expressibility.
Advanced Quantum Technologies, 2400171 (2024)
[69] Hibat-Allah, M., Mauri, M., Carrasquilla, J., Perdomo-Ortiz, A.: A framework for demonstrating practical quantum advantage: comparing quantum against classical generative models. Communications Physics 7(1), 68 (2024)
[70] Kölle, M., Stenzel, G., Stein, J., Zielinski, S., Ommer, B., Linnhoff-Popien, C.: Quantum Denoising Diffusion Models (2024)
[71] De Falco, F., Ceschini, A., Sebastianelli, A., Le Saux, B., Panella, M.: Quantum latent diffusion models. Quantum Machine Intelligence 6(2), 85 (2024)
[72] Cacioppo, A., Colantonio, L., Bordoni, S., Giagu, S.: Quantum diffusion models for quantum data learning in high-energy physics. QTML 2024 Conference (2024)
[73] De Falco, F., Ceschini, A., Sebastianelli, A., Le Saux, B., Panella, M.: Quantum hybrid diffusion models for image synthesis. KI - Künstliche Intelligenz (2024)
[74] Kivijervi, N.T.: Quantum diffusion model. Master's thesis, University of Oslo (2024)
[75] Zhang, B., Xu, P., Chen, X., Zhuang, Q.: Generative quantum machine learning via denoising diffusion probabilistic models. Phys. Rev. Lett. 132, 100602 (2024)
[76] Chen, C., Zhao, Q., Zhou, M., He, Z., Sun, Z., Situ, H.: Quantum Generative Diffusion Model: A Fully Quantum-Mechanical Model for Generating Quantum State Ensemble (2024)
[77] Cacioppo, A., Colantonio, L., Bordoni, S., Giagu, S.: Quantum Diffusion Models (2023)
[78] Kwun, G., Zhang, B., Zhuang, Q.: Mixed-State Quantum Denoising Diffusion Probabilistic Model (2024)
[79] Hoogeboom, E., Nielsen, D., Jaini, P., Forré, P., Welling,
M.: Argmax flows and multinomial diffusion: Learning categorical distributions. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P.S., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 12454–12465. Curran Associates, Inc. (2021)
[80] Kossakowski, A.: On quantum statistical mechanics of non-Hamiltonian systems. Reports on Mathematical Physics 3(4), 247–274 (1972)
[81] Lindblad, G.: On the generators of quantum dynamical semigroups. Communications in Mathematical Physics 48(2), 119–130 (1976)
[82] Gorini, V., Kossakowski, A., Sudarshan, E.C.G.: Completely positive dynamical semigroups of n-level systems. Journal of Mathematical Physics 17(5), 821–825 (1976)
[83] LeCun, Y., Cortes, C., Burges, C.: MNIST handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2 (2010)
[84] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818–2826 (2016)
[85] Krzywinski, M., Altman, N.: Visualizing samples with box plots. Nature Methods 11(2), 119–120 (2014)
[86] Nichol, A.Q., Dhariwal, P.: Improved denoising diffusion probabilistic models. In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event. Proceedings of Machine Learning Research, vol. 139, pp. 8162–8171. PMLR (2021)
[87] Javadi-Abhari, A., Treinish, M., Krsulich, K., Wood, C.J., Lishman, J., Gacon, J., Martiel, S., Nation, P.D., Bishop, L.S., Cross, A.W., Johnson, B.R., Gambetta, J.M.: Quantum computing with Qiskit (2024)
[88] IBM Quantum Computing. https://www.ibm.com/quantum
[89] Ronneberger, O., Fischer, P., Brox, T.: U-net: Convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F. (eds.)
Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015, pp. 234–241. Springer, Cham (2015)
[90] Kingma, D.P., Welling, M.: Auto-Encoding Variational Bayes. In: 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings (2014)
[91] Weiss, G.H., Rubin, R.J.: Random Walks: Theory and Selected Applications, pp. 363–505. John Wiley & Sons, Ltd (1982)
[92] Weiss, G.H.: Aspects and Applications of the Random Walk. International Congress Series, Random Materials and Processes, ISSN 0925-5850. North-Holland (1994)
[93] Kempe, J.: Quantum random walks: an introductory overview. Contemporary Physics 50(1), 339–359 (2009)
[94] Aharonov, D., Ambainis, A., Kempe, J., Vazirani, U.: Quantum walks on graphs. In: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing. STOC '01, pp. 50–59. Association for Computing Machinery, New York, NY, USA (2001)
[95] IBM Qiskit. https://www.ibm.com/quantum/qiskit
[96] QuTiP. https://qutip.org
[97] PyTorch. https://pytorch.org
[98] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. e-print arXiv:1412.6980 (2017)

Supplementary Material

In Fig. S1, we show the forward and backward processes for image generation implemented by the simulator fake_brisbane for the digit 0 of the MNIST dataset. In Fig. S2, we illustrate the generated images for the digits 1, 2, ..., 9 of MNIST obtained with a hybrid Quantum Stochastic Walk (QSW) dynamics (ω = 0.3), the simulator fake_brisbane, and the real IBM quantum hardware ibm
brisbane.

Fig. S1: Image generation with a Quantum Walk (QW)-based Diffusion Model (DM) with noise from the IBM simulator fake_brisbane. We report (for selected values of t) the evolution of 9 random samples of the real dataset in the forward process (first row) and 9 random generated samples in the backward process (third row). We also show the evolution of the pixel value distributions during the forward process (second row), reporting the Kullback–Leibler (KL) divergence to compare the full training dataset (blue) with the final uniform distribution (orange), and the evolution of the backward process (fourth row) for an equally-sized generated dataset, comparing it with the original training dataset. Finally, we calculate the FID between the original and generated dataset for t = 0 (FID = 241).

[Fig. S2 panels: Original MNIST; Generation with QSW (ω = 0.3); Generation with QW on fake_brisbane; Generation with QW on ibm_brisbane, each with per-digit FID values.]

Fig. S2: Image generation with a hybrid QSW dynamics and a QW-based DM with noise from a simulated and a real Noisy Intermediate-Scale Quantum (NISQ) device for the MNIST digits from 1 to 9 (the results for the digit 0 are already included in the main manuscript and in Fig. S1 above). We illustrate nine different samples for each handwritten digit, while in the first row the samples are taken from the original MNIST dataset. In the second row, the samples are generated via the implementation of the forward process with a simulated QSW dynamics with ω = 0.3.
In the third and the fourth rows, the samples are generated using a forward process implemented with QWs and executed, respectively, on the simulator fake_brisbane and the real quantum machine ibm_brisbane. We also report the FID metric values between the original full training set and a same-sized generated dataset.
Published as a conference paper at ICLR 2025

ENHANCING UNCERTAINTY ESTIMATION AND INTERPRETABILITY VIA BAYESIAN NON-NEGATIVE DECISION LAYER

Xinyue Hu*,1, Zhibin Duan*,2, Bo Chen†,1, Mingyuan Zhou3
1 National Key Laboratory of Radar Signal Processing, Xidian University, Xi'an, 710071, China.
2 School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi, China.
3 McCombs School of Business, The University of Texas at Austin, Austin, TX 78712
xinyuehu122@gmail.com zbduan@xjtu.edu.cn bchen@mail.xidian.edu.cn mingyuan.zhou@mccombs.utexas.edu

ABSTRACT

Although deep neural networks have demonstrated significant success due to their powerful expressiveness, most models struggle to meet practical requirements for uncertainty estimation. Concurrently, the entangled nature of deep neural networks leads to a multifaceted problem, where various localized explanation techniques reveal that multiple unrelated features influence the decisions, thereby undermining interpretability. To address these challenges, we develop a Bayesian Non-negative Decision Layer (BNDL), which reformulates deep neural networks as a conditional Bayesian non-negative factor analysis. By leveraging stochastic latent variables, the BNDL can model complex dependencies and provide robust uncertainty estimation. Moreover, the sparsity and non-negativity of the latent variables encourage the model to learn disentangled representations and decision layers, thereby improving interpretability. We also offer theoretical guarantees that BNDL can achieve effective disentangled learning. In addition, we developed a corresponding variational inference method utilizing a Weibull variational inference network to approximate the posterior distribution of the latent variables. Our experimental results demonstrate that with enhanced disentanglement capabilities, BNDL not only improves the model's accuracy but also provides reliable uncertainty estimation and improved interpretability.
*Equal contribution. †Corresponding author.
arXiv:2505.22199v1 [cs.LG] 28 May 2025

1 INTRODUCTION

Over the last decade, deep neural networks (DNNs) have achieved significant success and have been widely applied across various research domains (LeCun et al., 2015). As these applications expand, quantifying uncertainty in predictions has gained importance, especially for AI safety (Amodei et al., 2016). A key goal of uncertainty estimation is to ensure that neural networks assign low confidence to test cases poorly represented by training data or prior knowledge (Gal & Ghahramani, 2016). One approach to this challenge is Bayesian neural networks, which treat model parameters as random variables. Although progress has been made in developing approximate inference methods for Bayesian neural networks (Li et al., 2016; Louizos & Welling, 2017; Shi et al., 2017), computational scalability continues to pose a significant obstacle. Alternatively, deterministic methods such as deep ensembles (Lakshminarayanan et al., 2017) and dropout (Gal & Ghahramani, 2016) have been proposed. However, these approaches necessitate running full DNNs multiple times during the testing phase, resulting in high computational costs and increased inference time (Fan et al., 2021).

In addition to uncertainty estimation, there is growing demand for interpretability in DNNs, aimed at helping users understand model decisions through various tools. Advanced techniques have been developed to provide localized explanations by identifying key features or regions influencing outputs (Olah et al., 2017; Yosinski et al., 2015). However, a key challenge in this field is that neurons within well-trained DNNs tend to be multifaceted
https://arxiv.org/abs/2505.22199v1
(Nguyen et al., 2016), meaning they respond to multiple, unrelated features. This phenomenon may arise from the entangled nature of DNNs, wherein multiple features are utilized for various tasks. To address this challenge, significant efforts have been made in the literature, including the use of specialized regularizers aimed at promoting feature disentanglement (Nguyen et al., 2016), and the employment of sparse linear decision layers to select the most important features (Wong et al., 2021). While these approaches have led to notable advancements, they warrant further exploration from both practical and theoretical standpoints. Specifically, the method of employing a sparse linear decision layer may impair task performance due to the loss of essential dimensional information. Furthermore, while empirical results suggest that sparsity enhances model interpretability and mitigates the challenges posed by multifaceted neurons (Moakhar et al., 2023), these claims lack rigorous theoretical support.

While deep neural networks (DNNs) face inherent limitations, non-negative factor analysis (NFA)—notably Poisson variants (Zhou et al., 2012)—exhibits strengths in sparse concept disentanglement and uncertainty-aware modeling through stochastic latent variables (Lee & Seung, 1999). Thus, a compelling motivation exists to transfer the good properties of NFA to DNNs. However, their paradigm divergence poses challenges: NFA relies on shallow linear architectures, while DNNs employ deep non-linear hierarchies. A key insight lies in decomposing DNNs into non-linear feature extractors and linear decision layers, where recent studies show the latter is sufficient for capturing uncertainty (Kristiadi et al., 2020; Dhuliawala et al., 2023b; Joo et al., 2020; Parekh et al., 2022). Further, Wong et al. (2021) demonstrate that leveraging a sparse decision layer can enhance a model's interpretability through innovative techniques.
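For readers unfamiliar with NFA, its core operation is a non-negative factorization in the spirit of Lee & Seung (1999); the following is a generic multiplicative-update sketch (illustrative only, not the BNDL inference procedure, with arbitrary toy dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))              # non-negative data matrix to factorize
r = 4                                 # number of latent factors
W = rng.random((20, r)) + 0.1         # factor scores (stay non-negative)
H = rng.random((r, 30)) + 0.1         # factor loadings (stay non-negative)

def frob_err(V, W, H):
    return np.linalg.norm(V - W @ H)

err0 = frob_err(V, W, H)
for _ in range(200):                  # Lee & Seung multiplicative updates:
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # element-wise ratios keep
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # both factors non-negative
err1 = frob_err(V, W, H)
print(err0, "->", err1)               # reconstruction error decreases
```

Because the updates only ever multiply by non-negative ratios, sparsity and non-negativity of the learned factors come for free, which is the property BNDL transfers to a decision layer.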
This line of inquiry motivates the use of NFA as a linear decision layer, integrated with DNNs as feature extractors, to enhance both uncertainty estimation and model interpretability. With the considerations above, we developed a Bayesian Non-negative Decision Layer (BNDL), designed to empower DNNs with enhanced interpretability and uncertainty estimation capabilities. Specifically, under the categorical likelihood, the label is factorized into a gamma-distributed factor score matrix (local latent variables) and a corresponding gamma-distributed factor loading matrix (global latent variables). The former represents the latent representation of the observation, while the latter captures the interaction between the latent variables and the label. Given the challenge of intractable posterior distributions for the latent variables, we introduce a deep Weibull variational neural network to effectively approximate the gamma-distributed latent variables (Zhang et al., 2018). All parameters are trained using stochastic gradient descent (SGD) within a variational inference framework. Furthermore, we provide theoretical guarantees for the model's disentanglement capabilities, which enhances its interpretability. Additionally, our complexity analysis indicates that the increase in computational effort is minimal during both the training and uncertainty testing phases. To assess the efficacy of the proposed model, we conducted evaluations on a wide range of benchmark datasets using image classification tasks. The experimental results demonstrate that the proposed approach consistently outperforms standard classification models and offers superior uncertainty estimation. The main contributions of the paper can be summarized as follows:

• We develop
a flexible Bayesian Non-negative Decision Layer (BNDL) for deep neural networks, empowering their interpretability and uncertainty estimation capabilities.

• The complexity analysis shows that the computational overhead introduced by BNDL is minimal compared to DNNs. Further, we provide a theoretical analysis to verify its disentanglement properties.

• We assessed the effectiveness of BNDL across multiple datasets, including CIFAR-10, CIFAR-100, and ImageNet-1k. BNDL not only preserves or even enhances baseline performance but also facilitates uncertainty estimation and improves the interpretability of neurons.

2 RELATED WORK

2.1 UNCERTAINTY ESTIMATION

Existing research in supervised learning has focused on modeling conditional distributions beyond the mean, particularly for predictive uncertainty. Ensemble methods (Liu et al., 2021; Lakshminarayanan et al., 2017) combine neural networks with stochastic outputs to quantify uncertainty, while Bayesian neural networks (BNNs) use distributions over network parameters to reflect model plausibility (Blundell et al., 2015; Hernández-Lobato & Adams, 2015; Kingma et al., 2015; Gal & Ghahramani, 2016; Tomczak et al., 2021). However, BNNs are complex and difficult to train.

Figure 1: Illustration of the graphical models. 1(a): the predictive process of output Y for the baseline deep neural network; 1(b): the generative model of DNNs with the introduced stochastic latent variable θ; 1(c): the generative model of the Bayesian non-negative decision layer; and 1(d): the corresponding approximate inference for latent variables θ.
In contrast, Bayesian Last Layer (BLL) methods, which focus uncertainty only on the output layer, offer a simpler, more efficient alternative (Ober & Rasmussen, 2019; Watson et al., 2021; Daxberger et al., 2021; Kristiadi et al., 2020; Harrison et al., 2024). BLL techniques, such as Ober & Rasmussen (2019)'s noise marginalization and Daxberger et al. (2021)'s use of Laplace Approximations (LA), improve probabilistic predictions. Recent advancements like retraining objectives (Weber et al., 2018) and variational improvements (Harrison et al., 2024; Watson et al., 2021) have expanded BLL's application in uncertainty estimation tasks. Another relevant approach is evidential deep learning (EDL), which uses higher-order conditional distributions, like the Dirichlet distribution, for uncertainty estimation (Sensoy et al., 2018; Malinin & Gales, 2018). EDL has shown effectiveness in tasks such as classification (Sensoy et al., 2018) and regression (Malinin et al., 2020). Furthermore, Amini et al. (2020) proposed training networks to infer hyperparameters of evidential distributions, enabling scalable uncertainty learning. Building on BLL principles, BNDL models the decision layer as a Bayesian generative model, using a gamma prior to enhance feature sparsity and disentanglement, improving both uncertainty estimation and interpretability.

2.2 INTERPRETABILITY TOOLS FOR DEEP NEURAL NETWORKS

The goal of neural network interpretability is to identify the mechanisms underlying DNNs' decision-making processes. The related research ranges from approaches which link abstract concepts to structural network components, such as specific neurons, for example via visualization (Yosinski et al., 2015; Nguyen et al., 2016), to approaches which aim to trace individual model outputs on a per-sample basis such as local
surrogates (Ribeiro et al., 2016) and salience maps (Simonyan et al., 2013). However, as noted by various recent studies, these local attributions can be easy to fool or may otherwise fail to capture global aspects of model behavior (Adebayo et al., 2018; Leavitt & Morcos, 2020; Wong et al., 2021). A major confounder for interpretability is that neurons in a well-trained DNN are often multifaceted (Nguyen et al., 2016; Moakhar et al., 2023), responding to various, often unrelated, features. In contrast, our approach ensures that the identified high-level concepts—i.e., the deep features utilized by the sparse decision layer—fully determine the model's behavior.

3 BAYESIAN NON-NEGATIVE DECISION LAYER

This section first reformulates the traditional DNNs as a latent variable model (Sec. 3.1) and then provides a detailed description of the proposed Bayesian non-negative decision layer, which consists of the generative model (Sec. 3.2) and the variational inference network (Sec. 3.3), followed by the description of the proposed variational inference algorithm (Sec. 3.4).

3.1 PRELIMINARIES ON DEEP NEURAL NETWORKS

We first adopt a latent variable model to re-examine DNNs, which are commonly tackled by training domain-specific neural networks with a sigmoid or softmax output layer. Dataset samples x are mapped deterministically by a neural network f to a real vector z = f(x), which is transformed in the softmax layer to a point on the simplex \Delta^{|\mathcal{Y}|}, a discrete distribution over class labels y \in \mathcal{Y}:

p(y \mid x) = \frac{\exp\{z w_y^T + b_y\}}{\sum_{y' \in \mathcal{Y}} \exp\{z w_{y'}^T + b_{y'}\}}    (1)

where w_y and b_y represent the weights and biases of the final fully connected layer. Thus, the classification process can be viewed as a generative process for label y, as shown in Fig. 1(a).
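The softmax head of Eq. 1 is easy to check numerically. The sketch below is illustrative only: `f`, `W`, and `b` are hypothetical stand-ins for a trained feature extractor and decision layer, not the paper's models.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

def p_y_given_x(x, f, W, b):
    # Eq. 1: map x deterministically to z = f(x), then apply the
    # softmax layer with weights W and biases b.
    z = f(x)
    return softmax(z @ W.T + b)

rng = np.random.default_rng(0)
f = lambda x: np.tanh(x)                      # hypothetical feature extractor
W, b = rng.normal(size=(5, 3)), rng.normal(size=5)
probs = p_y_given_x(rng.normal(size=3), f, W, b)
```

The output is a valid point on the simplex: non-negative entries summing to one, one probability per class.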
Previous works (Dhuliawala et al., 2023a; Joo et al., 2020) have shown that a softmax classifier is a special case of Equation 2:

p(y \mid x) = \int_z p(y \mid z)\, p(z \mid x)\, dz    (2)

The neural network f, up to the softmax layer, models p(z|x) as a delta distribution \delta(z - f(x)), with the softmax input representing a sample from p(z|x), and p(y|z) defined by the softmax layer. While the softmax output can be viewed as a categorical distribution, the limited randomness from the output layer is often insufficient for capturing complex dependencies (Chung et al., 2015). Additionally, DNNs with a softmax layer face overconfidence issues (Guo et al., 2017; Kristiadi et al., 2020; Liu et al., 2020), complicating uncertainty estimation. Furthermore, the data feature z and parameter w_y are often entangled as dense vectors, leading to multifaceted phenomena in localized explanations (Nguyen et al., 2016). These challenges motivate the development of a Bayesian non-negative decision layer.

3.2 BAYESIAN NON-NEGATIVE DECOMPOSITION LAYER

Building on the latent variable model of the softmax classification task, we can further reformulate DNNs as a Non-negative Factor Analysis, referred to as the Bayesian Non-negative Decision Layer (BNDL). Firstly, to better capture complex dependence and aleatoric uncertainty, referring to the inherent randomness in the data that cannot be explained away, we can intuitively extend the generative model of original DNNs with stochastic latent variables θ by modeling the latent representation z with a distribution. Thus, as illustrated
in Fig. 1(b), Eq. 2 is improved to

p(y \mid x) = \int_\theta p(y \mid \theta)\, p(\theta \mid x)\, d\theta    (3)

To further account for epistemic uncertainty, which refers to the uncertainty inherent in the model itself, we treat the weights of the final fully connected layer as stochastic latent variables. The generative model is then defined as follows, and its graphical model is shown in Fig. 1(c):

p(y \mid x) = \int_{\theta, \Phi} p(y \mid \theta, \Phi)\, p(\theta \mid x)\, p(\Phi)\, d\theta\, d\Phi    (4)

This formulation bears similarities to factor analysis (Yu et al., 2008), where both θ and Φ in classical DNNs are commonly sampled from a Gaussian distribution, making the generative model akin to Gaussian factor analysis. However, while Gaussian factor analysis can effectively evaluate uncertainty, it struggles to achieve disentangled representation learning with dense latent variables (Moran et al., 2021), which is crucial for model interpretability (Nguyen et al., 2016). Considering that the gamma distribution possesses both non-linearity and non-negativity, we use the gamma distribution as the prior distribution for θ and Φ. This choice allows for a more comprehensive capture of complex relationships within the model (Zhou et al., 2015; Duan et al., 2021; 2024), while also accommodating the characteristics of sparse non-negative variables, thereby enhancing the disentanglement and interpretability of the learned representations θ and the corresponding decision layer Φ (Lee & Seung, 1999). Specifically, given data samples x_j and their corresponding one-hot labels y_j \in \mathbb{R}_+^C, where C is the number of classes, we can factorize y_j under the categorical likelihood as follows:

y_j \mid \theta_j \sim \mathrm{Category}(\theta_j \Phi), \quad \theta_j \mid x_j \sim \mathrm{Gamma}(f_\theta(x_j), 1), \quad \Phi \sim \mathrm{Gamma}(1, 1).    (5)

where \theta_j \in \mathbb{R}_+^K is the factor score vector, each entry of which encodes the relative importance of each atom in a sample x_j; and \Phi \in \mathbb{R}_+^{K \times C} is the factor loading matrix, each column of which is a factor encoding the relative importance of each term.
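The generative process in Eq. 5 can be sketched by ancestral sampling. This is a minimal illustration, not the paper's implementation: `W_feat` and the softplus in `f_theta` are hypothetical stand-ins for the DNN that outputs the Gamma shape parameter, and NumPy's `Gamma(shape, scale=1)` matches the Gamma(·, 1) prior.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, C = 16, 8, 4                       # input dim, latent atoms, classes
W_feat = rng.normal(size=(D, K))         # hypothetical feature-extractor weights

def f_theta(x):
    # Stand-in for the DNN output; softplus keeps the Gamma shape positive.
    return np.log1p(np.exp(x @ W_feat))

def sample_label(x):
    # Eq. 5: theta_j | x_j ~ Gamma(f_theta(x_j), 1), Phi ~ Gamma(1, 1),
    #        y_j | theta_j ~ Category(theta_j Phi) after normalization.
    theta = rng.gamma(shape=f_theta(x), scale=1.0)        # (K,), non-negative
    Phi = rng.gamma(shape=1.0, scale=1.0, size=(K, C))    # (K, C), non-negative
    unnormalized = theta @ Phi                            # (C,)
    probs = unnormalized / unnormalized.sum()
    return rng.choice(C, p=probs), probs

y, probs = sample_label(rng.normal(size=D))
```

Because both factors are gamma-distributed, the product θ_j Φ is entrywise non-negative, so normalizing it always yields a valid categorical distribution.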
Intuitively, in a classification problem, the k-th column of \Phi \in \mathbb{R}_+^{K \times C}, denoted as \phi_k \in \mathbb{R}_+^C, represents the k-th distribution across all classes. Although in our formulation the prior of the latent variable θ_j is modulated by the input x_j, the constraint can be easily relaxed to allow the latent variable to be statistically independent of the input variable (Sohn et al., 2015; Kingma et al., 2014). Therefore, we can simply define the data-independent prior of the latent variable θ as f_\theta(x_j) = 1.

3.3 VARIATIONAL INFERENCE NEURAL NETWORK

We have constructed the generative process of y, which includes the parameters θ and Φ, in Sec. 3.2. Due to the intractable posterior in BNDL, we build a Weibull variational inference to approximate the posteriors of θ and Φ.

Weibull Approximate Posterior: While the gamma distribution appears suitable for the posterior distribution due to its encouragement of sparsity and adherence to the non-negativity condition, directly reparameterizing the gamma distribution can result in high noise (Zhang et al., 2018; Kingma & Welling, 2014; Knowles, 2015; Ruiz et al., 2016; Naesseth et al., 2017), and using the REINFORCE method for gradient estimation may lead to large variance (Williams, 1992). Hence, we use the reparameterizable Weibull distribution (Zhang et al., 2018) to approximate the posterior for the gamma latent variables, mainly due to the following considerations: i) the Weibull distribution has a simple reparameterization so that it is
easier to optimize; ii) the Weibull distribution is similar to a gamma distribution, capable of modeling sparse, skewed and positive distributions. Specifically, the latent variable x \sim \mathrm{Weibull}(k, \lambda) can be easily reparameterized as:

x = \lambda\,(-\ln(1 - \varepsilon))^{1/k}, \quad \varepsilon \sim \mathrm{Uniform}(0, 1).    (6)

where λ and k are the scale and shape parameters of the Weibull distribution, respectively; iii) the KL divergence from the Weibull to the gamma distribution has an analytic expression:

\mathrm{KL}(\mathrm{Weibull}(k, \lambda) \,\|\, \mathrm{Gamma}(\alpha, \beta)) = \frac{\gamma\alpha}{k} - \alpha \log\lambda + \log k + \beta\lambda\,\Gamma\!\left(1 + \frac{1}{k}\right) - \gamma - 1 - \alpha \log\beta + \log\Gamma(\alpha)    (7)

where γ is the Euler–Mascheroni constant.

Local latent variables inference network: As shown in Fig. 1(d), the variational inference network constructs the variational posterior:

q(\theta_j \mid x_j) = \mathrm{Weibull}(k_j, \lambda_j)    (8)

where the inference network can be defined as:

k_j = \mathrm{Softplus}(f_k(h_j)), \quad \lambda_j = \mathrm{ReLU}(f_\lambda(h_j)) / \exp(1 + 1/k_j), \quad h_j = f_{NN}(x_j)    (9)

where h_j is a feature extracted with deep neural networks, such as ResNet, which can be seen as a deep feature extractor. Let f_\cdot(\cdot) denote a neural network, where f_{NN} is the feature extractor, encompassing all layers from the input layer to the penultimate layer, and f_\lambda and f_k are the networks that infer the scale and shape parameters of the Weibull distribution, respectively. The Softplus function, defined as \log(1 + \exp(\cdot)), is applied as an element-wise non-linearity to ensure positive Weibull shape parameters. The Weibull distribution is used to approximate the gamma-distributed conditional posterior, and its parameters k_j \in \mathbb{R}_+^K and \lambda_j \in \mathbb{R}_+^K are inferred from the bottom-up data information using the neural networks.

Global latent variables inference network: For the same reason, we also use Weibull distributions to approximate the posteriors of the global latent variables \Phi \in \mathbb{R}_+^{K \times C}, formulated as

q(\Phi \mid -) = \mathrm{Weibull}(k_\Phi, \lambda_\Phi)    (10)

where the inference network can be expressed as:

k_\Phi = \mathrm{Softplus}(W_1), \quad \lambda_\Phi = \mathrm{ReLU}(W_2) / \exp(1 + 1/k_\Phi).
(11)

Note that W_1 and W_2 are randomly initialized matrices with dimensions matching Φ.

Connection with Non-Negative Matrix Factorization: Equations 8 and 11 ensure that E[\theta_j] = \lambda_j and E[\Phi] = \lambda_\Phi. Therefore, instead of sampling θ_j and Φ from their respective distributions, substituting their expectations makes the mapping equivalent to that of standard non-negative matrix factorization, resulting in y_j = \lambda_j \lambda_\Phi. In other words, if we let the shape parameter k of the Weibull distribution approach infinity (implying that the variance of the latent variables approaches zero, so that the distribution collapses into a point mass concentrated at the expectation), then the proposed stochastic decision layer reduces to non-negative matrix factorization. For further discussion of NMF, please refer to Appendix A.3.

3.4 VARIATIONAL INFERENCE

For BNDL, given the model parameters, referred to as Ω, which consist of the parameters in the generative model and the inference network, the marginal likelihood of the dataset (X, Y) is defined as:

p(Y \mid X) = \int\!\!\int \prod_{j=1}^{J} p(y_j \mid \theta_j, \Phi)\, p(\theta_j)\, d\theta_{j=1}^{J}\, d\Phi    (12)

The inference task is to learn the parameters of the generative model and the inference network. Similar to VAEs, the optimization objective of BNDL can be obtained by maximizing the evidence lower bound (ELBO) of the log-likelihood:

\mathcal{L}(Y) = \sum_{j=1}^{J} \mathbb{E}_{q(\theta_j|x_j)}\left[\ln p(y_j \mid \theta_j, \Phi)\right] - \sum_{j=1}^{J} \mathbb{E}_{q(\theta_j|x_j)}\left[\ln \frac{q(\theta_j|x_j)}{p(\theta_j)}\right] - \mathbb{E}_{q(\Phi|-)}\left[\ln \frac{q(\Phi|-)}{p(\Phi)}\right]    (13)

where the first term is the expected log-likelihood of the generative model, which ensures reconstruction performance, and the last two terms are the Kullback–Leibler (KL)
divergence terms that constrain the variational distributions q(−) to be close to their priors p(−). The parameters of BNDL can be directly optimized by advanced gradient algorithms, like Adam (Kingma & Ba, 2015).

Complexity analysis: Modifying the last layer of base deep neural networks minimally increases the parameter count, resulting in negligible space complexity. The time complexity of ResNet is dominated by its convolutional layers and can be expressed as O\left(\sum_{l=1}^{L} C_{in}^{(l)} \times C_{out}^{(l)} \times H^{(l)} \times W^{(l)} \times K^{(l)} \times K^{(l)}\right), where H, W, K \times K, C_{in} and C_{out} are the input height, width, kernel size, input channels, and output channels, respectively. In BNDL, the added time complexity from KL divergence computations for the local and global latent variables is O(C_{out}^{(L)}) and O(C_{out}^{(L)} \times C), respectively, which is negligible compared to ResNet's overall complexity. Unlike traditional ensemble methods and dropout approaches, which require multiple full-network runs to assess uncertainty, BNDL performs a single forward pass to infer the variational posterior, followed by sampling for uncertainty estimation, greatly reducing computational costs.

4 THEORETICAL GUARANTEES FOR BNDL

We provide theoretical guarantees for BNDL from the perspective of identifiable features. As described in Sec. 3.3, BNDL can be viewed as a Non-negative Matrix Factorization (NMF) problem. From this perspective, its objective function p(Y \mid \{\Phi, \theta, X\}) can be further reformulated as:

\min_{\theta \geq 0,\, \Phi \geq 0} \|Y - \theta\Phi\|_F^2    (14)

In the realm of non-negative matrix factorization, many studies have aimed to establish the identifiability and uniqueness of a decomposition θΦ, up to permutation and scaling. Recently, Gillis & Rajkó (2023) demonstrated that a subset of the columns of θ and Φ can be identified and made unique under more relaxed conditions. We will show how BNDL adheres to the criteria set forth in Gillis & Rajkó (2023), thereby enabling the learning of partially identifiable features.
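Before turning to the identifiability conditions, the variational building blocks above are simple enough to verify directly. The sketch below checks the reparameterization in Eq. 6 and the analytic KL in Eq. 7 in NumPy; it assumes only the standard Weibull mean \lambda\,\Gamma(1 + 1/k) for the sanity check, and is not the paper's training code.

```python
import math
import numpy as np

def weibull_rsample(k, lam, rng):
    # Eq. 6: x = lam * (-ln(1 - eps))**(1/k), eps ~ Uniform(0, 1). In a deep
    # learning framework eps is the only source of randomness, so gradients
    # flow through k and lam (the reparameterization trick).
    eps = rng.uniform(size=np.shape(lam))
    return lam * (-np.log1p(-eps)) ** (1.0 / k)

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant in Eq. 7

def kl_weibull_gamma(k, lam, alpha, beta):
    # Eq. 7: analytic KL(Weibull(k, lam) || Gamma(alpha, beta)).
    return (EULER_GAMMA * alpha / k - alpha * math.log(lam) + math.log(k)
            + beta * lam * math.gamma(1.0 + 1.0 / k)
            - EULER_GAMMA - 1.0 - alpha * math.log(beta) + math.lgamma(alpha))

# Weibull(1, 1) is Exponential(1), which coincides with Gamma(1, 1), so the
# KL should vanish there and be strictly positive elsewhere.
zero_case = kl_weibull_gamma(1.0, 1.0, 1.0, 1.0)
generic_case = kl_weibull_gamma(2.0, 1.0, 1.0, 1.0)

# Monte Carlo check of the mean: for k = 1 the Weibull reduces to an
# Exponential with mean lam.
rng = np.random.default_rng(0)
samples = weibull_rsample(k=1.0, lam=2.0 * np.ones(100_000), rng=rng)
```

The zero-divergence case is a useful unit test when wiring Eq. 7 into the ELBO of Eq. 13, since sign or term errors in the KL are otherwise easy to miss.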
Proposition 1 (Gillis & Rajkó, 2023). The k-th column of θ is identifiable under the two assumptions:

• Selective Window: There exists a row of Φ, say the j-th, such that \Phi(j, :) = \alpha e_{(k)}^T for \alpha > 0, where e_{(k)}^T represents the k-th standard row vector in the vector space.

• Sparsity Constraint: The k-th column of Φ contains at least r − 1 entries equal to zero, where r is the rank of Y.

The selective window assumption states that the column in θ corresponding to Φ(j, :) is unique in the dataset, which is reasonable in many applications (Gillis, 2020); e.g., θ can represent latent classes in a classification task, where Wang et al. (2024) achieves full identifiability by assuming each latent class corresponds to a unique sample. Under this assumption, it suffices to have a single latent class with a unique sample, making it more feasible and easier to satisfy. For the sparsity constraint, the use of a gamma prior and a ReLU activation function in Φ within the BNDL framework, as outlined in Eqs. 5 and 11, enforces sparsity during the training process. Moreover, instead of relying on the parameter norm (Wong et al., 2021), which often leads to performance degradation, we propose a more scalable and effective adaptive activation function to achieve sparsity. Specifically, we employ f(x) = ReLU(x − α), where α is a predefined constant, set as a hyperparameter. This approach offers
a more flexible mechanism for inducing sparsity without sacrificing model performance. Similarly, we can demonstrate the partial identifiability of Φ, as it is often considered to be the transpose of θ (Fu et al., 2017; HaoChen et al., 2021). In conclusion, BNDL follows the aforementioned assumptions, and its optimization objective promotes the partial identifiability of the learned features and decision layer, thereby enhancing their disentanglement capability. We validate our theoretical guarantees through experiments presented in Sec. 5.2. More details on the theoretical guarantees can be found in Appendix A.2.

5 EXPERIMENTS

Experiment Setup: Following Wong et al. (2021), we analyze the following models: (a) ResNet classifiers—ResNet-50 trained on ImageNet-1k (Deng et al., 2009; Russakovsky et al., 2015) and Places-10 (a subset of Places365 (Zhou et al., 2017)), and ResNet-18 trained on CIFAR-10/100 (Krizhevsky, 2009); (b) a ViT-based model (Dosovitskiy et al., 2021) with pretrained weights from Hugging Face*. Baselines are detailed in Sec. 5.1. We evaluate accuracy and uncertainty for quantitative analysis and use LIME for qualitative insights into BNDL predictions. Setup details, including hyperparameters and datasets, are provided in Appendix A.1. Our code is available at https://github.com/XYHu122/BNDL.

Uncertainty evaluation metric: We estimate uncertainty using a hypothesis testing approach (Fan et al., 2021). This method provides interpretable p-values, enabling practical deployment for binary uncertainty decisions. A prediction's certainty is determined by comparing its p-value against a threshold. To evaluate uncertainty estimates, we use the Patch Accuracy vs. Patch Uncertainty (PAvPU) metric (Mukhoti & Gal, 2018), which is defined as PAvPU = (n_{ac} + n_{iu}) / (n_{ac} + n_{au} + n_{ic} + n_{iu}), where n_{ac}, n_{au}, n_{ic}, and n_{iu} represent the counts of accurate-certain, accurate-uncertain, inaccurate-certain, and inaccurate-uncertain samples, respectively.
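The metric is a straightforward count over per-sample accuracy and uncertainty flags; a minimal sketch (the input lists are toy data, not results from the paper):

```python
def pavpu(accurate, uncertain):
    # PAvPU = (n_ac + n_iu) / (n_ac + n_au + n_ic + n_iu): the fraction of
    # samples that are either accurate-and-certain or inaccurate-and-uncertain.
    n_ac = sum(a and not u for a, u in zip(accurate, uncertain))
    n_au = sum(a and u for a, u in zip(accurate, uncertain))
    n_ic = sum(not a and not u for a, u in zip(accurate, uncertain))
    n_iu = sum(not a and u for a, u in zip(accurate, uncertain))
    return (n_ac + n_iu) / (n_ac + n_au + n_ic + n_iu)

# Toy example: 8 accurate-certain, 1 accurate-uncertain, 1 inaccurate-certain.
accurate = [True] * 9 + [False]
uncertain = [False] * 8 + [True, False]
score = pavpu(accurate, uncertain)  # (8 + 0) / 10 = 0.8
```

Both failure modes (confident mistakes and needless hesitation) lower the score equally, which is why PAvPU complements plain accuracy.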
Higher PAvPU values indicate that the model reliably produces accurate predictions with high certainty and inaccurate ones with high uncertainty.

Sparsity Measurement: We measure the sparsity of the final decision layer weights. Since the weights in our method are non-negative, we follow the approach of Wong et al. (2021) and Wang et al. (2024), considering weights greater than 1 × 10^{-5} as non-sparse (denoted as l_{non}) and the remaining values as sparse (denoted as l_{sparse}). The sparsity is calculated as nnz = l_{non} / (l_{non} + l_{sparse}), where a smaller value indicates a higher level of sparsity.

5.1 CLASSIFICATION PERFORMANCE

Specifically, for each dataset, we conducted the following experiments:

*https://huggingface.co/google/vit-base-patch16-224

Table 1: Overall model accuracy across different datasets, with BNDL being our method. We use ResNet-50 as the baseline for ImageNet-1k, and ResNet-18 as the baseline for CIFAR-10 and CIFAR-100. ViT refers to vit-base-patch16-224.

Model        | CIFAR-10 ACC | CIFAR-10 PAvPU | CIFAR-100 ACC | CIFAR-100 PAvPU | ImageNet-1k ACC | ImageNet-1k PAvPU
ResNet       | 94.98±0.12   | -              | 74.62±0.23    | -               | 75.33±0.14      | -
MC Dropout   | 94.54±0.03   | 78.83±0.12     | 78.12±0.06    | 64.41±0.22      | 75.98±0.08      | 76.50±0.02
BM           | 94.07±0.07   | 93.98±0.3      | 75.81±0.34    | 77.13±0.67      | -               | -
CARD         | 90.93±0.02   | 91.11±0.04     | 71.42±0.01    | 71.48±0.03      | 76.20±0.00      | 76.29±0.01
ResNet-BNDL  | 95.54±0.08   | 95.58±0.20     | 79.82±0.13    | 81.1±0.21       | 77.01±0.14      | 77.66±0.03
ViT-Base     | 95.51±0.03   | -              | 84.15±0.03    | -               | 80.33           | -
ViT-BNDL     | 96.34±0.04   | 97.01±0.02     | 85.16±0.03    | 86.37±0.11      | 81.29±0.02      | 82.50±0.03

(a) Performance and Uncertainty Evaluation: We replaced the decision layers in the network with BNDL and performed supervised training from scratch. The results are shown in Table
1. The baseline models are grouped into two categories: 1) uncertainty estimation networks, including Bernoulli MC Dropout (Gal & Ghahramani, 2016), BM (Joo et al., 2020), and CARD (Han et al., 2022); 2) dense decision layer baselines, including ViT-Base (Dosovitskiy et al., 2021) (we used the pretrained weights for vit-base-patch16-224 only, modifying the decision layer for continued training) and ResNet (He et al., 2016). It is important to note that the goal of BNDL is not to achieve a substantial improvement in the model's performance, but rather to preserve the model's performance while enhancing its interpretability and uncertainty estimation capabilities.

(b) Impact of Sparsity: We replaced the decision layers of a pre-trained model with BNDL, froze the existing feature layers, and fine-tuned only the parameters of the BNDL. The results across different datasets are illustrated in Fig. 3, where we compare BNDL (shown in blue) with the Debuggable Network (Wong et al., 2021) (shown in orange); both utilize the same backbone with different sparse decision layers.

5.1.1 PERFORMANCE AND UNCERTAINTY EVALUATION

In Table 1, we show accuracy and PAvPU. Our model reports the mean and variance across 5 different random seeds, while the results of other models are reported from previous papers where available. Since we directly used the source* to test on ImageNet-1k, the variance term is not provided in the table. We can observe that: 1) by leveraging stochastic latent variables to capture complex dependencies, BNDL consistently outperformed the baselines across all datasets and demonstrated improved performance across various widely-used architectures, including ResNet and ViT. 2) The integration of BNDL endowed the model with the capability for uncertainty estimation, as evidenced by the improvements in PAvPU metrics when compared to several strong baselines.
3) BNDL exhibits scalability and can be extended to larger datasets, such as ImageNet-1k, as demonstrated by the complexity analysis.

Figure 2: The leftmost line chart illustrates the average uncertainty and accuracy across subsets of the ImageNet test set. The middle and right panels sample images from the subsets with the highest and lowest uncertainty, as defined by the curve. The top row shows the original images with ground truth labels, while the bottom row displays the model's predictions alongside LIME visualizations.

Relation between Uncertainty and Accuracy: We conducted an ablation study to explore the relationship between prediction uncertainty and downstream performance. Using ResNet-BNDL, we sorted the ImageNet test set by evaluation uncertainty into 10 subsets. For each subset, we calculated the average accuracy and uncertainty, then plotted the results in Fig. 2. We also selected images from the highest and lowest uncertainty subsets and visualized their LIME explanations, showing the most influential features of the activations on the right side of the figure. The line chart illustrates a clear negative correlation between uncertainty and accuracy: higher uncertainty corresponds to lower accuracy. This suggests that the model provides reliable uncertainty estimates, helping to avoid potential misclassifications. In the visualization, we observe
that the model made correct predictions for images with low uncertainty, while for images with high uncertainty, the visualizations reveal the causes of misclassification; e.g., in the image of a wine bottle, the model primarily focused on the wine glass filled with red wine in the background, leading to a misclassification as red wine. The Uncertainty vs. Accuracy results for additional datasets and ViT-BNDL are provided in Appendix A.3.

5.1.2 IMPACT OF SPARSITY

Figure 3: Sparsity-accuracy trade-offs for BNDL and the Debuggable Network (Wong et al., 2021) on (a) CIFAR-10, (b) ImageNet-1k, (c) CIFAR-100, and (d) Places-10. Each point on a curve represents a BNDL classifier. Horizontal dashed lines indicate the fully dense accuracy for each network. The x-axis shows the proportion of non-sparse weights, with higher values indicating denser distributions, while the y-axis represents test accuracy on each dataset.

The results of fine-tuning on the original model are shown in Fig. 3. For each dataset, we control the sparsity according to the activation function mentioned in Eq. 11, resulting in the sparsity vs. accuracy curve. The blue lines represent our method, while the orange lines represent the Debuggable Network (Wong et al., 2021), which also employs the idea of a sparse decision layer. It can be observed that at the same level of sparsity, our model overall outperforms the Debuggable Network across different datasets. Additionally, across different datasets, the decision layer can be made substantially sparser—by up to two orders of magnitude—with minimal impact on accuracy (cf. Fig. 3(a)).
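The x-axis quantity here is the nnz sparsity measure from Sec. 5, and the knob that moves a classifier along the curve is the shifted ReLU from Sec. 4. A minimal sketch of both, where `W` is a small toy stand-in for decision-layer weights:

```python
import numpy as np

THRESHOLD = 1e-5   # entries above this count as non-sparse, as in Sec. 5

def nnz_fraction(W):
    # nnz = l_non / (l_non + l_sparse); smaller values mean a sparser layer.
    return float((np.abs(W) > THRESHOLD).sum()) / W.size

def shifted_relu(W, alpha):
    # The adaptive activation f(x) = ReLU(x - alpha) from Sec. 4; a larger
    # alpha zeroes out more weights and hence lowers nnz.
    return np.maximum(W - alpha, 0.0)

W = np.array([[0.9, 0.2],
              [0.6, 0.0]])
dense_nnz = nnz_fraction(W)                       # 3 of 4 entries exceed 1e-5
sparse_nnz = nnz_fraction(shifted_relu(W, 0.5))   # only 0.9 and 0.6 survive
```

Sweeping α and recording (nnz, accuracy) pairs is what traces out a curve like those in Fig. 3.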
For instance, when the sparsity of the ImageNet-1k classifier is 0.0024, the network's classification accuracy still reaches 75.7%.

5.2 INTERPRETABILITY EVALUATION

Disentangled representation learning: To validate the disentanglement on real-world data, we adopt an unsupervised disentanglement metric, SEPIN@k (Do & Tran, 2019). SEPIN@k measures how each feature θ_i is disentangled from the others θ_{≠i} by computing their conditional mutual information with the top k features, i.e., \mathrm{SEPIN@}k = \frac{1}{k} \sum_{i=1}^{k} I(x, \theta_{r_i} \mid \theta_{\neq r_i}), which is estimated with the InfoNCE lower bound (Oord et al., 2018), implemented following Wang et al. (2024).

Table 2: Feature disentanglement score on ImageNet-1k, where @k denotes the top-k dimensions. Values are scaled by 10^2; we use ResNet-50 as the baseline for BNDL.

Model          | SEPIN@1   | SEPIN@10  | SEPIN@100 | SEPIN@1000 | SEPIN@all
ResNet50       | 1.50±0.02 | 1.03±0.01 | 0.60±0.01 | 0.31±0.01  | 0.23±0.01
ResNet50-BNDL  | 2.59±0.03 | 2.12±0.01 | 1.30±0.01 | 0.65±0.01  | 0.44±0.01

Figure 4: The LIME visualizations for BNDL and ResNet-50, focusing on the largest θ for each image, show that BNDL's features align more closely with the semantic meaning of the true labels, suggesting more disentangled representations. As we selected the top-10 super-pixels for visualization, the results may include some less significant super-pixels; this issue is alleviated when we reduce the number
of top-k super-pixels.

As shown in Table 2, BNDL features exhibit much better disentanglement than ResNet-50 across all top-k dimensions. The advantage is more pronounced when considering the top features, as learned features also contain noise dimensions. This verifies the disentanglement of the learned features, as analyzed in Sec. 4. BNDL indeed provides better feature disentanglement on real-world data. The disentanglement results on other datasets can be found in Appendix A.3.

Visualization: We visualized the feature representation θ of BNDL and the baseline model (ResNet-50) for the same images in ImageNet-1k, as illustrated in Fig. 4. Specifically, we selected the feature θ with the highest activation for each image and applied the LIME visualization, in line with the approach used in Fig. 2. The top row of Fig. 4 shows the true categories of the corresponding images, the second row presents our visualization results, and the third row displays the visualization results of the baseline model. Overall, the visualization results of BNDL are more semantically meaningful compared to those of ResNet-50. For instance, in the image of a candle, BNDL successfully captures parts of the candle, while ResNet-50 only identifies the cake. Similar observations occur in other categories, and we provide additional visualization results in Appendix A.3. This finding aligns with the conclusions drawn in Sec. 4, suggesting that BNDL has learned more identifiable features through the constraint of sparsity.

6 CONCLUSION

We introduce BNDL as a simple and scalable Bayesian decision layer that excels in both uncertainty estimation and interpretability, while maintaining or improving accuracy across a range of tasks, including large-scale applications. With an efficient parameterization of the covariance-dependent variational distribution, BNDL enhances the flexibility of DNNs with only a slight increase in memory and computational cost.
We demonstrate the broad applicability of BNDL on both ResNet-based and ViT-based models and show that BNDL achieves superior performance compared to these baselines. Notably, we provide both practical and theoretical guarantees for BNDL's ability to learn more disentangled and identifiable features. Based on these results, we believe BNDL can serve as an efficient alternative to the standard decision layer in the versatile toolbox of neural network modules.

REPRODUCIBILITY STATEMENT

The novel methods introduced in this paper are accompanied by detailed descriptions (Sec. 3), and their implementations are provided at https://github.com/XYHu122/BNDL.

ACKNOWLEDGMENTS

The work of X. Hu, Z. Duan, and B. Chen was supported in part by the National Natural Science Foundation of China under Grant U21B2006; in part by the Shaanxi Youth Innovation Team Project; in part by the Fundamental Research Funds for the Central Universities under Grants QTZX23037 and QTZX22160; and in part by the 111 Project under Grant B18039.

REFERENCES

Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. Sanity checks for saliency maps. Advances in Neural Information Processing Systems, 31, 2018.

Alexander Amini, Wilko Schwarting, Ava Soleimany, and Daniela Rus. Deep evidential regression. Advances in Neural Information Processing Systems, 33:14927–14937, 2020.

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. Concrete problems in AI
https://arxiv.org/abs/2505.22199v1
safety. arXiv preprint arXiv:1606.06565, 2016.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International conference on machine learning, pp. 1613–1622. PMLR, 2015.

Junyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. Advances in neural information processing systems, 28, 2015.

Erik Daxberger, Agustinus Kristiadi, Alexander Immer, Runa Eschenhagen, Matthias Bauer, and Philipp Hennig. Laplace redux - effortless bayesian deep learning. Advances in Neural Information Processing Systems, 34:20089–20103, 2021.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. IEEE, 2009.

Shehzaad Dhuliawala, Mrinmaya Sachan, and Carl Allen. Variational classification. Trans. Mach. Learn. Res., 2024, 2023a. URL https://api.semanticscholar.org/CorpusID:258740937.

Shehzaad Dhuliawala, Mrinmaya Sachan, and Carl Allen. Variational classification. arXiv preprint arXiv:2305.10406, 2023b.

Kien Do and Truyen Tran. Theory and evaluation metrics for learning disentangled representations. arXiv preprint arXiv:1908.09961, 2019.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In 9th International Conference on Learning Representations (ICLR 2021), 2021. URL https://openreview.net/forum?id=YicbFdNTTy.

Zhibin Duan, Dongsheng Wang, Bo Chen, Chaojie Wang, Wenchao Chen, Yewen Li, Jie Ren, and Mingyuan Zhou. Sawtooth factorial topic embeddings guided gamma belief network. In International Conference on Machine Learning, pp. 2903–2913.
PMLR, 2021.

Zhibin Duan, Tiansheng Wen, Muyao Wang, Bo Chen, and Mingyuan Zhou. A non-negative vae: the generalized gamma belief network. arXiv preprint arXiv:2408.03388, 2024.

Xinjie Fan, Shujian Zhang, Korawat Tanwisuth, Xiaoning Qian, and Mingyuan Zhou. Contextual dropout: An efficient sample-dependent dropout module. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=ct8_a9h1M.

Xiao Fu, Kejun Huang, and Nicholas D. Sidiropoulos. On identifiability of nonnegative matrix factorization. IEEE Signal Processing Letters, 25:328–332, 2017. URL https://api.semanticscholar.org/CorpusID:3483466.

Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050–1059. PMLR, 2016.

Nicolas Gillis. Nonnegative matrix factorization. SIAM, 2020.

Nicolas Gillis and Róbert Rajkó. Partial identifiability for nonnegative matrix factorization. SIAM Journal on Matrix Analysis and Applications, 44(1):27–52, 2023.

Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. In International conference on machine learning, pp. 1321–1330. PMLR, 2017.

Xizewen Han, Huangjie Zheng, and Mingyuan Zhou. Card: Classification and regression diffusion models. ArXiv, abs/2206.07275, 2022. URL https://api.semanticscholar.org/CorpusID:249674651.

Jeff Z. HaoChen, Colin Wei, Adrien Gaidon, and Tengyu Ma. Provable guarantees for self-supervised deep learning with spectral contrastive loss. In Neural Information Processing Systems, 2021. URL https://api.semanticscholar.org/CorpusID:235367888.

James Harrison, John Willes, and Jasper Snoek. Variational bayesian last layers. arXiv preprint arXiv:2404.11599, 2024.

Kaiming He,
Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.

José Miguel Hernández-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of bayesian neural networks. In International conference on machine learning, pp. 1861–1869. PMLR, 2015.

Patrik O Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of machine learning research, 5(9), 2004.

Kejun Huang, Nicholas D Sidiropoulos, and Ananthram Swami. Non-negative matrix factorization revisited: Uniqueness and algorithm for symmetric decomposition. IEEE Transactions on Signal Processing, 62(1):211–224, 2013.

Taejong Joo, Uijung Chung, and Minji Seo. Being bayesian about categorical probability. In International Conference on Machine Learning, 2020. URL https://api.semanticscholar.org/CorpusID:211171911.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.

Diederik P Kingma and Max Welling. Stochastic gradient vb and the variational auto-encoder. In Second international conference on learning representations, ICLR, volume 19, pp. 121, 2014.

Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. Advances in neural information processing systems, 27, 2014.

Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. Advances in neural information processing systems, 28, 2015.

David A Knowles. Stochastic gradient variational bayes for gamma approximating distributions. arXiv preprint arXiv:1509.01631, 2015.

Agustinus Kristiadi, Matthias Hein, and Philipp Hennig.
Being bayesian, even just a bit, fixes overconfidence in relu networks. In International conference on machine learning, pp. 5436–5446. PMLR, 2020.

A Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017.

Matthew L Leavitt and Ari Morcos. Towards falsifiable interpretability research. arXiv preprint arXiv:2010.12016, 2020.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.

Daniel D Lee and H Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.

Chunyuan Li, Changyou Chen, David Carlson, and Lawrence Carin. Preconditioned stochastic gradient langevin dynamics for deep neural networks. In Proceedings of the AAAI conference on artificial intelligence, volume 30, 2016.

Shiwei Liu, Tianlong Chen, Zahra Atashgahi, Xiaohan Chen, Ghada Sokar, Elena Mocanu, Mykola Pechenizkiy, Zhangyang Wang, and Decebal Constantin Mocanu. Deep ensembling with no overhead for either training or testing: The all-round blessings of dynamic sparsity. arXiv preprint arXiv:2106.14568, 2021.

Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution detection. Advances in neural information processing systems, 33:21464–21475, 2020.

Christos Louizos and Max Welling. Multiplicative normalizing flows for variational bayesian neural networks. In International Conference on Machine Learning, pp. 2218–2227. PMLR,
2017.

Andrey Malinin and Mark Gales. Predictive uncertainty estimation via prior networks. Advances in neural information processing systems, 31, 2018.

Andrey Malinin, Sergey Chervontsev, Ivan Provilkov, and Mark Gales. Regression prior networks. arXiv preprint arXiv:2006.11590, 2020.

Arshia Soltani Moakhar, Eugenia Iofinova, and Dan Alistarh. Spade: Sparsity-guided debugging for deep neural networks. arXiv preprint arXiv:2310.04519, 2023.

Gemma E Moran, Dhanya Sridhar, Yixin Wang, and David M Blei. Identifiable deep generative models via sparse decoding. arXiv preprint arXiv:2110.10804, 2021.

Jishnu Mukhoti and Yarin Gal. Evaluating bayesian deep learning methods for semantic segmentation. arXiv preprint arXiv:1811.12709, 2018.

Christian Naesseth, Francisco Ruiz, Scott Linderman, and David Blei. Reparameterization gradients through acceptance-rejection sampling algorithms. In Artificial Intelligence and Statistics, pp. 489–498. PMLR, 2017.

Anh Nguyen, Jason Yosinski, and Jeff Clune. Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. arXiv preprint arXiv:1602.03616, 2016.

Sebastian W Ober and Carl E Rasmussen. Benchmarking the neural linear model for regression. In Second Symposium on Advances in Approximate Bayesian Inference, 2019.

Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. Distill, 2(11):e7, 2017.

Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.

Jayneel Parekh, Sanjeel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc, and Gaël Richard. Listen to interpret: Post-hoc interpretability for audio networks with nmf. Advances in Neural Information Processing Systems, 35:35270–35283, 2022.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin.
“Why should I trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135–1144, 2016.

Francisco R Ruiz, Titsias RC AUEB, David Blei, et al. The generalized reparameterization gradient. Advances in neural information processing systems, 29, 2016.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115:211–252, 2015.

Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE international conference on computer vision, pp. 618–626, 2017.

Murat Sensoy, Lance Kaplan, and Melih Kandemir. Evidential deep learning to quantify classification uncertainty. Advances in neural information processing systems, 31, 2018.

Jiaxin Shi, Shengyang Sun, and Jun Zhu. Kernel implicit variational inference. arXiv preprint arXiv:1705.10119, 2017.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.

Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. Advances in neural information processing systems, 28, 2015.

Marcin Tomczak, Siddharth Swaroop, Andrew Foong, and Richard Turner. Collapsed variational bounds for bayesian neural networks. Advances in Neural Information Processing Systems, 34:25412–25426, 2021.

Yifei Wang, Qi Zhang, Yaoyu Guo,
and Yisen Wang. Non-negative contrastive learning. In The Twelfth International Conference on Learning Representations, 2024.

Joe Watson, Jihao Andreas Lin, Pascal Klink, Joni Pajarinen, and Jan Peters. Latent derivative bayesian last layer networks. In International Conference on Artificial Intelligence and Statistics, pp. 1198–1206. PMLR, 2021.

Noah Weber, Janez Starc, Arpit Mittal, Roi Blanco, and Lluís Màrquez. Optimizing over a bayesian last layer. In NeurIPS workshop on Bayesian Deep Learning, 2018.

Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8:229–256, 1992.

Eric Wong, Shibani Santurkar, and Aleksander Madry. Leveraging sparse linear layers for debuggable deep networks. In International Conference on Machine Learning, pp. 11205–11216. PMLR, 2021.

Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural networks through deep visualization. arXiv preprint arXiv:1506.06579, 2015.

Byron M Yu, John P Cunningham, Gopal Santhanam, Stephen Ryu, Krishna V Shenoy, and Maneesh Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. Advances in neural information processing systems, 21, 2008.

Hao Zhang, Bo Chen, Dandan Guo, and Mingyuan Zhou. Whai: Weibull hybrid autoencoding inference for deep topic modeling. arXiv preprint arXiv:1803.01328, 2018.

Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6):1452–1464, 2017.

Mingyuan Zhou, Lauren Hannah, David Dunson, and Lawrence Carin. Beta-negative binomial process and Poisson factor analysis. In Artificial Intelligence and Statistics, pp. 1462–1471. PMLR, 2012.

Mingyuan Zhou, Yulai Cong, and Bo Chen. The Poisson gamma belief network.
Advances in Neural Information Processing Systems, 28, 2015.

A APPENDIX / SUPPLEMENTAL MATERIAL

A.1 EXPERIMENTAL SETTING

All experiments are conducted on Linux servers equipped with 32 AMD EPYC 7302 16-Core Processors and 2 NVIDIA 3090 GPUs. Models are implemented in PyTorch version 1.12.1, scikit-learn version 1.0.2 and Python 3.7. The CIFAR-10, CIFAR-100, and ImageNet-1k datasets we used are all publicly available standard datasets. As for Places-10, it is a subset of Places365 (Zhou et al., 2017) containing the classes “airport terminal”, “boat deck”, “bridge”, “butcher's shop”, “church-outdoor”, “hotel room”, “laundromat”, “river”, “ski slope” and “volcano”. We chose this dataset to ensure a fair comparison with Debuggable Networks (Wong et al., 2021), as they also selected this subset.

• Training from scratch setup: For ResNet-18 on CIFAR-10 and CIFAR-100, we set the batch size to 128, learning rate to 0.1, training epochs to 150, and weight decay to 5e-4. For ResNet-50 on ImageNet-1k, we set the batch size to 256, weight decay to 1e-4, epochs to 200, and learning rate to 0.1. For the calculation of PAvPU, we sampled 20 times to obtain 20 different logits, and then selected the top-2 classes based on their mean logits, performing a two-sample t-test to obtain the p-value. Following the approach used by CARD and MC Dropout models, we set the p-value threshold to 0.05 to determine
whether a sample is classified as certain.

• Fine-tuning setup: For Places-10, we set the learning rate to 0.1, batch size to 128, and epochs to 100. For ImageNet-1k, CIFAR-10, and CIFAR-100, we set the learning rate to 0.001 and epochs to 200.

• Sparsity vs. Accuracy in Sec. 5.1.2: We follow the default settings of (Wong et al., 2021) when running the Debuggable baselines, and we run BNDL using the settings from the fine-tuning setup. It is worth mentioning that in Debuggable Networks, the sparsity of the decision layer is controlled via the elastic net, which adjusts the sparsity of decision-layer weights through the regularization path; this kind of parameter-norm penalty often leads to performance degradation. In contrast, BNDL increases the sparsity of the decision layer by using a gamma distribution as the prior. Additionally, we control the sparsity of the weights by applying an activation function to the weights, w′ = ReLU(w − α), where α is a predefined constant set as a hyperparameter.

Visualization tool (LIME) Traditionally, LIME is used to obtain instance-specific explanations, i.e., to identify the super-pixels in a given test image that are most responsible for the model's prediction. In our setting, we follow this intuition and use the following two-step procedure to obtain LIME-based feature interpretations: (i) We randomly select an image category and then randomly choose K images from that category. For each image, we identify the feature with the maximum activation for visualization. Here, maximum activation means that we select the maximum weight of the category index corresponding to the image in Φ, and select the feature neuron multiplied by this weight. (ii) Run LIME on each of these examples to identify relevant super-pixels. At a high level, this involves performing linear regression to map image super-pixels to the (normalized) activation of the deep feature (rather than the probability of a specific class as is typical).
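The shifted-ReLU weight sparsification described in the Sparsity vs. Accuracy setup above, w′ = ReLU(w − α), can be sketched in a few lines. The gamma-distributed toy weights and the α value below are illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

def sparsify_weights(w: np.ndarray, alpha: float) -> np.ndarray:
    # Shifted ReLU: entries at or below the threshold alpha are zeroed out.
    return np.maximum(w - alpha, 0.0)

def one_sparsity(w: np.ndarray, eps: float = 1e-8) -> float:
    # Fraction of weights that remain active (non-zero).
    return float(np.mean(np.abs(w) > eps))

rng = np.random.default_rng(0)
w = rng.gamma(shape=0.5, scale=1.0, size=(2048, 1000))  # toy non-negative weights
w_sparse = sparsify_weights(w, alpha=1.0)
print(one_sparsity(w), one_sparsity(w_sparse))  # thresholding lowers the active fraction
```

A larger α yields sparser decision weights at the risk of pruning useful connections, which is why α is treated as a hyperparameter.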
A.2 THEORETICAL GUARANTEES FOR BNDL

Problem Formulation As described in Sec. 3.3, BNDL can be viewed as a Non-negative Matrix Factorization (NMF) problem. From this perspective, its objective function p(Y | {Φ, θ, X}) can be further reformulated as:

min_{θ≥0, Φ≥0} ∥Y − θΦ∥²_F    (15)

This property enables NMF to accurately recover the ground-truth factors that generated the data. Following the definition in (Gillis & Rajkó, 2023), we first define an exact NMF (that is, an errorless reconstruction) as follows:

Definition 1. (Exact NMF of size r) Given a nonnegative matrix Y ∈ R^{m×n}, the decomposition θΦ, where θ ∈ R^{m×r}_+ and Φ ∈ R^{r×n}_+, is an exact NMF of Y of size r if Y = θΦ.

The formally defined identifiability of an exact NMF is as follows:

Definition 2. (Identifiability of Exact NMF) The exact NMF Y = θ*Φ* of size r is identifiable if and only if for any other exact NMF Y = θΦ of size r, there exists a permutation matrix Π ∈ {0,1}^{r×r} and a nonsingular diagonal scaling matrix D such that:

θ = θ*ΠD and Φ = D⁻¹ΠΦ*    (16)

Intuitively, Definition 2 indicates that all columns of θ and Φ must be identifiable. Achieving this typically requires very stringent conditions, such as the requirement for both θ and Φ to satisfy the so-called sufficiently scattered condition (SSC) (Huang et al.,
2013). Therefore, we concentrate on the partial identifiability of BNDL, which similarly ensures the identifiability and uniqueness of a subset of columns of θ and Φ under more relaxed conditions.

Partially Identifiable Features To demonstrate that BNDL is partially identifiable, we first present the definition of partial identifiability in exact NMF.

Definition 3. (Partial identifiability in Exact NMF) Let Y = θ*Φ* be an exact NMF of Y of size r. The k-th column of θ* is identifiable if and only if for any other exact NMF Y = θΦ of size r, there exists an index j and a scalar α > 0 such that:

θ(:, j) = α θ*(:, k)    (17)

Similarly, we can define the identifiability of the k-th column of Φ* using symmetry. Previous work (Gillis & Rajkó, 2023) has shown that Definition 3 can hold under two relatively relaxed assumptions.

Proposition 2. (Gillis & Rajkó, 2023) Let Y = θ*Φ*, where θ* ∈ R^{m×r}_+ and Φ* ∈ R^{r×n}_+ with rank(Y) = r. Without loss of generality, assume Y, Φ* and θ are column stochastic. The k-th column of θ* is identifiable if it satisfies the following two conditions:

• (Selective Window) There exists a row of Φ*, say the j-th, such that Φ*(j, :) = α e⊤_(k) for α > 0.

• (Sparsity Constraint) There exists a subset J of r − 1 columns of Y, namely Y(:, J), such that rank(Y(:, J)) = r − 1 and for all j ∈ J:

F_{θ*}(θ*(:, k)) ∩ F_{θ*}(R(:, j)) = ∅    (18)

which means that the minimal face containing the k-th column of θ* does not intersect with the minimal faces containing the columns of Y(:, J).

i) For the Selective Window assumption: Intuitively, this assumption means that the column in θ (latent class) corresponding to Φ*(j, :) appears uniquely in the dataset, which is reasonable in many applications (Gillis, 2020). E.g., in classification tasks (Wang et al., 2024), the authors achieve full identifiability by positing that each latent class has a unique sample.
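To make the exact-NMF objective in Eq. (15) concrete, here is a minimal sketch using the classic Lee–Seung multiplicative updates on a synthetic Y that admits an exact factorization by construction. This is a generic NMF solver shown for illustration, not the BNDL training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r = 30, 40, 4
theta_true = rng.random((m, r))
phi_true = rng.random((r, n))
Y = theta_true @ phi_true  # an exact NMF of size r exists by construction

# Random non-negative initialization.
theta = rng.random((m, r)) + 0.1
phi = rng.random((r, n)) + 0.1
eps = 1e-10
for _ in range(2000):
    # Lee-Seung multiplicative updates for min_{theta,phi >= 0} ||Y - theta @ phi||_F^2;
    # the multiplicative form keeps both factors non-negative at every step.
    phi *= (theta.T @ Y) / (theta.T @ theta @ phi + eps)
    theta *= (Y @ phi.T) / (theta @ phi @ phi.T + eps)

rel_err = np.linalg.norm(Y - theta @ phi) / np.linalg.norm(Y)
print(f"relative reconstruction error: {rel_err:.2e}")
```

Even when the reconstruction error is near zero, the recovered factors need not match (θ*, Φ*) up to permutation and scaling; that is precisely what the identifiability conditions discussed here are for.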
In the context of the selective window assumption, we only need to assume the presence of a single latent class with a unique sample, which makes it more feasible and easier to achieve.

ii) For the Sparsity Constraint: This condition implies that the k-th column of Φ* contains at least r − 1 entries equal to zero, namely, Φ*(J, k) = 0. Due to the use of a gamma prior and a ReLU function in Φ within BNDL, as shown in Eqs. (5) and (11) respectively, Φ is enforced to be sparse during the training process. Additionally, in Sec. 5.1.1, we demonstrate the high sparsity of BNDL; for instance, the 1-sparsity of decision weights for ImageNet is only 0.04. Only a small portion of the weights have a decisive impact on the final results, indicating that BNDL satisfies the sparsity constraint.

In summary, our theory demonstrates that BNDL satisfies the assumptions outlined in Proposition 2, facilitating the partial identifiability of the learned features.

A.3 ADDITIONAL EXPERIMENTAL RESULTS

Uncertainty Evaluation We provide additional ablation study results in Fig. 5, with experimental settings consistent with Sec. 5.1.1. For plotting the curves, we used B-spline interpolation to generate smooth curves, setting k = 3 (cubic spline). The line charts illustrate a clear negative correlation between uncertainty and
accuracy: higher uncertainty corresponds to lower accuracy. This suggests that the model provides reliable uncertainty estimates, helping to avoid potential misclassifications.

Figure 5: Uncertainty vs. Test Acc curves on other datasets and models: (a) ResNet-18 on CIFAR-10, (b) ResNet-18 on CIFAR-100, (c) ViT on ImageNet-1k.

Table 3: Feature score on CIFAR-100, where @k denotes the top-k dimensions. Values are scaled by 10².

Method        SEPIN@1     SEPIN@10    SEPIN@100   SEPIN@1000  SEPIN@all
ResNet18      3.17 ±0.06  2.40 ±0.03  1.79 ±0.03  1.21 ±0.01  1.21 ±0.01
BM            3.19 ±0.05  2.53 ±0.02  1.17 ±0.02  1.28 ±0.02  1.24 ±0.01
ResNet18-NMF  3.21 ±0.03  2.95 ±0.02  1.87 ±0.03  1.33 ±0.02  1.23 ±0.03
BNDL          3.91 ±0.06  3.43 ±0.03  2.77 ±0.03  1.69 ±0.03  1.69 ±0.02

Disentangled Measurement Disentanglement results for BNDL, BM, ResNet-NMF and ResNet18. Among them, ResNet-NMF represents a framework where non-negativity constraints are applied to both the weights of the decision layer and the input features, transforming the problem into one of non-negative matrix factorization. It can be observed that BNDL consistently achieves the best disentanglement metrics, aligning with the conclusions drawn in Section 5.2 of the paper. This suggests that BNDL has successfully learned more identifiable features through the constraint of sparsity.

Table 4: Accuracy and weight sparsity results on CIFAR-100, where ResNet-NMF applies non-negativity constraints to both the input features and the weights of the decision layer.
Method         Accuracy     1-Sparsity
ResNet18       74.62 ±0.23  0.97 ±0.02
ResNet18-NMF   76.73 ±0.16  0.23 ±0.01
ResNet18-BNDL  79.82 ±0.13  0.12 ±0.01

Additional Discussion on the Relation between BNDL and Non-negative Matrix Factorization We now provide a more detailed explanation of the relationship between NMF and interpretability. Specifically, the non-negativity constraint in NMF ensures that the decomposition of the data matrix can be interpreted as an “additive combination” rather than a complex mathematical operation involving cancellations between positive and negative terms. This characteristic aligns more closely with how humans naturally understand patterns in real-world data. Therefore, both BNDL and Wang et al. (2024) leverage the principles of NMF by modeling the final layer of the network as an NMF problem, employing non-negativity constraints to enhance interpretability. However, there remain key differences between BNDL and Wang et al. (2024), which include the following: 1) NCL remains a point-estimation model that enforces non-negativity constraints by applying a ReLU activation function to the features. In contrast, BNDL probabilistically models the features as non-negative distributions, which enables uncertainty estimation, something that NCL does not accommodate; 2) Although NMF naturally provides some degree of disentanglement, it is often insufficient for handling complex data (Hoyer, 2004), necessitating additional constraints to improve this effect. Unlike NCL, which does not introduce such extra constraints, BNDL applies a gamma prior to the factors in matrix decomposition, further enhancing the sparsity and non-linearity of these factors. This additional constraint strengthens both the disentanglement and interpretability of the model. Regarding (Duan et al., 2024), it also recognizes the additive property introduced by non-negativity, which aids in the
disentanglement of the network. It leverages the non-negativity and sparsity of the gamma distribution to design a Variational Autoencoder (VAE) generative model, achieving better disentanglement performance compared to a Gaussian VAE. In general, while BNDL shares some similar tools with (Wang et al., 2024) and (Duan et al., 2024), there are notable differences in their network designs and architectures, as well as the distinct objectives they serve in different tasks.

We incorporated NMF into a supervised learning framework and conducted experiments on CIFAR-100. Consistent with the experimental settings in the paper, ResNet-NMF adopts the ResNet18 baseline model. The experiments were performed on an NVIDIA 3090 GPU. We ran five experiments with different random seeds and computed the mean and variance of the results. The experimental outcomes are presented in Tables 3 and 4. From the experiments on accuracy and disentanglement, the following observations can be made: (1) Incorporating NMF into the network enhances its performance. By introducing non-negativity constraints on both weights and features, ResNet-NMF achieves sparser weights and improved disentanglement performance. (2) BNDL consistently achieves the best results on the CIFAR-100 dataset. This can be attributed to its probabilistic modeling and the additional sparsity and nonlinear constraints introduced by the gamma prior. These factors lead to greater weight sparsity and further improved disentanglement, aligning with our discussion.

Additional Visualization Results We visualized the feature representation θ of BNDL and the baseline model (ResNet-50) for the same images in ImageNet-1k, as illustrated in Figures 8 to 12. Specifically, we selected the feature θ with the highest activation for each image and applied the LIME method using the top-10 super-pixels for visualization, in line with the approach used in Fig. 2.
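The LIME step behind these feature visualizations, fitting a linear surrogate from super-pixel on/off masks to a feature activation rather than to a class probability, can be sketched as follows. The toy "activation" here is a hypothetical linear stand-in for a real deep feature.

```python
import numpy as np

rng = np.random.default_rng(2)
n_superpixels, n_samples = 16, 500

# Hypothetical stand-in for a deep feature: its activation depends strongly
# on super-pixels 3 and 7 (the object region) and not on the others.
true_w = np.zeros(n_superpixels)
true_w[[3, 7]] = [2.0, 1.5]

# Random on/off perturbations of the super-pixels, as LIME would generate.
masks = rng.integers(0, 2, size=(n_samples, n_superpixels)).astype(float)
activations = masks @ true_w + 0.05 * rng.standard_normal(n_samples)

# LIME core step: linear regression from masks to the feature activation.
X = np.hstack([masks, np.ones((n_samples, 1))])  # add an intercept column
coef, *_ = np.linalg.lstsq(X, activations, rcond=None)
top2 = sorted(np.argsort(coef[:-1])[-2:].tolist())
print(top2)  # super-pixels most responsible for the activation: [3, 7]
```

In the actual procedure, the response variable is the (normalized) activation of the selected deep feature computed on perturbed versions of the image, and the highest-coefficient super-pixels are the ones highlighted in the visualizations.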
We have added comparative visualization results of BNDL and several models on ImageNet-1k, including the uncertainty estimation model BM (Joo et al., 2020) and the sparse decision-layer model Debuggable Networks (Wong et al., 2021); the results are shown in Fig. 6. The top row of each figure shows the true categories of the corresponding images, the second row presents our visualization results, and the third row displays the visualization results of the baseline model. Overall, the visualization results of BNDL are more semantically meaningful compared to those of ResNet-50.

We also included the results of post-hoc explanations using GradCAM (Selvaraju et al., 2017), which is based on the concept of Class Activation Mapping. It aims to generate heatmaps by exploiting the relationship between the feature maps of specific layers in the neural network and the final prediction of the classification task, visually highlighting the regions that contribute the most to the prediction of a particular class. In our experiment, we used the most activated class for each image to generate explanations, with the results shown in Fig. 7. It can be observed that BNDL tends to generate more focused heatmaps. For example, in the first image, BNDL only focuses on the region of the dog, while ResNet50 also attends to the surrounding ground. These results further demonstrate that BNDL is inclined to generate more disentangled features.

Figure 6: The LIME visualization results for BNDL, ResNet-50, Debuggable Networks and BM (example categories: Argiope aurantia, hay, lampshade, Egyptian cat, water ouzel, lorikeet), focusing on the largest θ for each image, demonstrate that BNDL's feature visualization aligns more closely with the semantic meaning of the true labels. This suggests that BNDL has learned more identifiable features.

Figure 7: We applied GradCAM to visualize ResNet50 and BNDL, and the corresponding heatmaps are shown above. The ground truth label for the visualized image is “Saint Bernard.” It can be observed that the BNDL visualization is more focused, while ResNet exhibits multifaceted behavior. For instance, in the fourth image, ResNet attends to both the person and the dog, whereas BNDL is more disentangled, focusing solely on the dog.

Figure 8: (category: Arctic fox, white fox, Alopex lagopus) The LIME visualization results for BNDL and ResNet-50, focusing on the largest θ for each image, demonstrate that BNDL's feature visualization aligns more closely with the semantic meaning of the true labels compared to ResNet-50. This suggests that BNDL has learned more identifiable features.

Figure 9: (category: Egyptian cat) The LIME visualization results for BNDL and ResNet-50, focusing on the largest θ for each image, demonstrate that BNDL's feature visualization aligns more closely with the semantic meaning of the true labels compared to ResNet-50. This suggests that BNDL has learned more identifiable features.

Figure 10: (category: loupe, jeweler's loupe) The LIME visualization results for BNDL and ResNet-50, focusing on the largest θ for each image, demonstrate that BNDL's feature visualization aligns more closely with the semantic meaning of the true labels compared to ResNet-50.
This suggests that BNDL has learned more identifiable features.

Figure 11: (category: Redbone) The LIME visualization results for BNDL and ResNet-50, focusing on the largest θ for each image, demonstrate that BNDL's feature visualization aligns more closely with the semantic meaning of the true labels compared to ResNet-50. This suggests that BNDL has learned more identifiable features.

Figure 12: (category: trolleybus, trolley coach, trackless trolley) The LIME visualization results for BNDL and ResNet-50, focusing on the largest θ for each image, demonstrate that BNDL's feature visualization aligns more closely with the semantic meaning of the true labels compared to ResNet-50. This suggests that BNDL has learned more identifiable features.
arXiv:2505.22200v1 [cs.CV] 28 May 2025

Investigating Mechanisms for In-Context Vision Language Binding

Darshana Saravanan, Makarand Tapaswi, Vineet Gandhi
CVIT, IIIT Hyderabad, India

Abstract

To understand a prompt, Vision-Language models (VLMs) must perceive the image, comprehend the text, and build associations within and across both modalities. For instance, given an ‘image of a red toy car’, the model should associate this image to phrases like ‘car’, ‘red toy’, ‘red object’, etc. Feng and Steinhardt [4] propose the Binding ID mechanism in LLMs, suggesting that the entity and its corresponding attribute tokens share a Binding ID in the model activations. We investigate this for image-text binding in VLMs using a synthetic dataset and task that requires models to associate 3D objects in an image with their descriptions in the text. Our experiments demonstrate that VLMs assign a distinct Binding ID to an object's image tokens and its textual references, enabling in-context association.

1. Introduction

As Vision-Language models (VLMs) like Gemini [15] and GPT-4o [6] become ubiquitous, it is crucial to understand how they function to determine why they respond the way they do, especially in safety-critical applications. A fundamental ability of VLMs is to associate information across an image and text to reason about a query. For example, given an image of a furniture store that has a chair with a yellow tag and the caption All furniture with a yellow tag have a 30% discount, a VLM should be able to infer that the chair has a discounted selling price. Our goal is to study this ability to bind objects in an image to information in text. To this end, we propose the Shapes task, a controlled synthetic task that requires models to associate 3D objects in an image with their references in the text. In Fig. 1, the image contains two 3D objects: a green sphere and a red cube. The green sphere is referred to as the green object in the context.
[Figure 1: image of a green sphere and a red cube, shown with the prompt "Answer the question based on the provided image and the context below. Context: The green object contains item P. The red object contains item I. Question: Which item does the sphere contain? Answer: The sphere contains item <token_p>"]

Figure 1. Shapes Task. Given an image with two 3D objects and a text description (context), the model needs to comprehend the question and identify the correct item (token_p) contained in the queried object. Image and text tokens highlighted with the same color are expected to contain the same binding IDs, allowing the model to predict the correct answer.

So, to answer the question 'What does the sphere contain?', the model needs to internally learn that the sphere corresponds to the phrase green object: is('green sphere patches', 'green object'), and that this green object contains item P: contains('green object', 'item P'). The binding ID mechanism proposed in [4] suggests that LLMs' internal activations represent binding information by attaching binding ID vectors to the corresponding entities and attributes. We investigate whether VLMs use a similar mechanism
https://arxiv.org/abs/2505.22200v1
to represent associations between image tokens and text tokens. We study the most commonly used VLM architecture, which consists of a visual encoder, a multi-modal projector and a language model.

VLMs and LLMs have some key differences that necessitate careful experimentation. (i) Text tokens have fixed embeddings, while concepts in an image (objects, colors, textures, etc.) do not have fixed embeddings; they are represented in the patch tokens obtained from the vision encoder. (ii) Recent powerful VLMs like LLaVA-OneVision [10], Molmo [3], and Qwen2-VL [17] utilize an image encoder that converts the input image into a set of multiscale, multi-crop images and independently maps each of these images into a set of vision tokens. This leads to multiple sets of tokens for the same visual concept. We adapt the causal mediation based experiments from [4] to account for these differences and make the following observations: (i) Image tokens corresponding to the location of the visual concept represent information related to that concept. This is applicable even when there are multiple tokens corresponding to multiple crops from the same image. (ii) VLMs implement the binding ID mechanism. There are binding ID vectors that associate the image tokens corresponding to a visual object and its references in the text tokens.

2. Task Definition and Notations

Shapes task. This task consists of images with two 3D objects (O_0, O_1) with distinct shapes and colors. The context refers to both objects using their color (C_0, C_1) and assigns a unique item (I_0, I_1). We use the notation c = ctxt(O_0 ↔ C_0 ↔ I_0, O_1 ↔ C_1 ↔ I_1) to denote a context where object O_0 of the color C_0 contains item I_0 and object O_1 of the color C_1 contains item I_1. In Fig. 1, O_0 and O_1 correspond to the green sphere and red cube patches in the image, C_0 and C_1 correspond to green and red in the text, and I_0 and I_1 correspond to item P and item I in the text respectively.
The question refers to one of the objects using its shape and queries the item assigned to it. Note that 'item P/I' are randomly chosen uppercase English letters with no inherent meaning. We generate the images using Blender [2]. We consider four choices for the shape (cone, cube, cylinder and sphere) and six choices for the color (red, blue, green, yellow, cyan and purple). The objects occupy a fixed number of patches and are located in fixed positions.

Notation. Let Φ_v(·) denote the vision encoder and g(·) denote the multi-modal projector. For an image X_v, the patch embeddings are t_v = g(Φ_v(X_v)). Now, let t_c denote the prompt tokens, comprising image tokens t_v and text tokens up to the context's end, just before the question. Let the LLM have L transformer layers and a D-dimensional activation space. For every token position p, Z_p ∈ R^{L×D} is the stacked set of residual stream activations. The activations at the object, color, and item positions are denoted as Z_{O_k}, Z_{C_k} and Z_{I_k} respectively, where k ∈ {0, 1}.

3. Do Binding IDs Occur in VLMs?

Binding ID Mechanism. Feng and Steinhardt [4] suggest that LLMs associate concepts through binding ID vectors in their activations. Specifically, the activations of an LLM can be decomposed into vectors that encode the
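The Shapes task setup above can be sketched as a small data generator. This is a hypothetical illustration in Python (the authors render the images with Blender; here we only construct the text side of a sample, with function and field names of our own choosing):

```python
import random

SHAPES = ["cone", "cube", "cylinder", "sphere"]
COLORS = ["red", "blue", "green", "yellow", "cyan", "purple"]

def make_shapes_sample(rng):
    """Sample one Shapes-task instance: two objects with distinct
    shapes and distinct colors, each assigned a random uppercase item
    letter with no inherent meaning."""
    shape0, shape1 = rng.sample(SHAPES, 2)
    color0, color1 = rng.sample(COLORS, 2)
    item0, item1 = rng.sample("ABCDEFGHIJKLMNOPQRSTUVWXYZ", 2)
    context = (f"The {color0} object contains item {item0}. "
               f"The {color1} object contains item {item1}.")
    # The question refers to an object by its *shape*, so answering
    # requires binding shape (image) <-> color (text) <-> item (text).
    queried = rng.choice([0, 1])
    question = f"Which item does the {[shape0, shape1][queried]} contain?"
    answer = [item0, item1][queried]
    return {"objects": [(shape0, color0, item0), (shape1, color1, item1)],
            "context": context, "question": question, "answer": answer}

sample = make_shapes_sample(random.Random(0))
```

The color appears only in the context and the shape only in the question, which is what forces the cross-modal association the paper studies.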
concept and those that encode the binding information. Each binding ID consists of similar vector pairs in a subspace, with associated concepts sharing one vector from the same ID. Extending this, we describe our hypothesis for the existence of binding IDs in VLMs using the Shapes task below:

• Consider 3D objects as visual entities and their colors and items mentioned in the text as their attributes. For the k-th visual entity-attributes tuple (I_k, C_k, O_k), the model represents binding vectors in its activations in an abstract form, independent of any particular object, color, or item.

• For object patch tokens, the activations Z_{O_k} can be decomposed as Z_{O_k} = f_O(O_k) + b_O(k). Similarly, Z_{C_k} = f_C(C_k) + b_C(k) and Z_{I_k} = f_I(I_k) + b_I(k). Here f_O(O_k), f_C(C_k), f_I(I_k) are the content vectors, and the set of binding vectors (b_O(k), b_C(k), b_I(k)) forms the binding ID for the k-th tuple.

• To answer the question about an object, the model selects the item that shares the same binding ID.

[Figure 2: schematic of the intervention pipeline. Step 1: cache activations from sample 1. Step 2: cache activations from sample 2. Step 3: replace any activation in sample 1 with the corresponding activation of sample 2 and evaluate the model.]

Figure 2. Causal intervention. In steps 1 and 2, activations from the first and second samples are saved. In step 3, object/color/item activations in the first sample are replaced with those from the second. This new activation stack is frozen, and the model is queried with all four objects to observe the change in predictions.

Note that, since binding IDs are independent of the entity/attribute, we can manipulate the associations built by the model by exchanging the binding IDs in the activations as Ẑ_{O_k} := Z_{O_k} − b_O(k) + b_O(k′), where k ≠ k′. In the following sections, we assert the existence of the binding ID mechanism by establishing two of its properties: Factorizability (Sec. 3.1) and Position independence (Sec. 3.2).
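The additive decomposition hypothesized above can be illustrated numerically. The following toy sketch uses synthetic random vectors, not actual VLM activations: activations are built as content vector plus binding vector, a simple readout matches binding components to pick the item, and exchanging binding IDs (Ẑ = Z − b(0) + b(1)) flips the association, as the hypothesis predicts:

```python
import random

DIM = 64
rng = random.Random(0)
vec = lambda: [rng.gauss(0.0, 1.0) for _ in range(DIM)]
add = lambda u, v: [a + b for a, b in zip(u, v)]
sub = lambda u, v: [a - b for a, b in zip(u, v)]
dot = lambda u, v: sum(a * b for a, b in zip(u, v))

# Content vectors f_O(.), f_I(.) and one binding ID vector per tuple k.
f_obj = {"sphere": vec(), "cube": vec()}
f_item = {"P": vec(), "I": vec()}
b_id = [vec(), vec()]  # binding ID vectors for tuples k = 0, 1

# Hypothesized activations: Z = content vector + binding vector.
Z_obj = {"sphere": add(f_obj["sphere"], b_id[0]),
         "cube":   add(f_obj["cube"],   b_id[1])}
Z_item = {"P": add(f_item["P"], b_id[0]),
          "I": add(f_item["I"], b_id[1])}

def answer(Z_objects, obj):
    """Toy readout: choose the item whose binding component is most
    aligned (dot product) with the queried object's binding component."""
    b = sub(Z_objects[obj], f_obj[obj])
    return max(Z_item, key=lambda it: dot(b, sub(Z_item[it], f_item[it])))

# Exchange binding IDs on the sphere: Z_hat = Z - b(0) + b(1).
Z_hat = dict(Z_obj)
Z_hat["sphere"] = add(sub(Z_obj["sphere"], b_id[0]), b_id[1])
```

Under the unaltered activations the sphere retrieves item P; after the binding-ID exchange it retrieves item I, mirroring the manipulation described in the text.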
Then, we exchange the associations built by the model using Mean interventions (Sec. 3.3).

3.1. Factorizability

Fig. 2 shows two samples from the Shapes task with the contexts c = ctxt(O_0 ↔ C_0 ↔ I_0, O_1 ↔ C_1 ↔ I_1) and c′ = ctxt(O′_0 ↔ C′_0 ↔ I′_0, O′_1 ↔ C′_1 ↔ I′_1). The Binding ID mechanism assumes that the information linking a concept to its attributes is stored locally within the activations at its token positions and is independent of the specific concept itself. This implies that the activations of the sphere (O_0) in the first sample and the cone (O′_0) in the second sample should contain the same binding vector b_O(0), as they both correspond to the 0th visual entity-attributes tuple in their respective samples. Replacing Z_{O_0} with Z_{O′_0} should now bind the cone with the text tokens green object and item P. We demonstrate this using causal interventions [16] on the activations as described below.

• Cache all activations Z_c from the model run on c.
• Cache activations Z_{O′_0} and Z_{O′_1} from the model run on c′.
• Construct a new stack of activations Z*_c by modifying Z_c
[Figure 3 grids: mean log probabilities of each item given each queried object, for (a) object activation replacements, (b) item activation replacements, and (c) color activation replacements.]

Figure 3. Factorizability results. Each row shows the model's mean log probabilities of an item contained in an object. The first grid in each case shows results with unaltered activations. Squares highlighted in red denote the expected predictions based on our hypothesis.
Model outputs match the hypothesis, suggesting a multi-modal binding ID mechanism.

such that Z_{O_k} is replaced with Z_{O′_k} for any k ∈ {0, 1}.
• Re-evaluate the model by probing what item each shape (O_0, O_1, O′_0, O′_1) contains by freezing the activation cache as Z*_c. We expect the model to now associate O′_k with I_k, since both Z_{O_k} and Z_{O′_k} contain the same binding ID vector b_O(k).

Results. Fig. 3 shows the mean log probability of choosing an item before and after interventions. We show the factorizability results for object patch tokens, color tokens and item tokens. In Fig. 3a, the first grid shows the results when the activations are unaltered. As expected, for objects O_0 and O_1, items I_0 and I_1 are chosen at a higher rate, respectively, and for objects O′_0 and O′_1, items I_0 and I_1 are chosen at a roughly equal rate since these objects do not exist in the image. In the second grid, we replace Z_{O_0} with Z_{O′_0}. Now, when the model is queried for the item contained by O′_0, the model picks item I_0 over I_1. The third grid follows the same pattern: Z_{O_1} is replaced by Z_{O′_1}, resulting in O′_1 containing I_1. Finally, both object activations are replaced in the fourth grid and we observe that the model chooses I_{0/1} for O′_{0/1} respectively. Note that when the object patches are replaced, the color of the new object no longer matches the color description in the text. Nevertheless, the new object is still associated with the same item as the original object, as they both contain the same binding vector. We observe a similar behavior for replacing items in
Fig. 3b. When Z_{I_k} is replaced by Z_{I′_k}, the model prefers item I′_k for object O_k. However, when we intervene on the color activations Z_{C_k}, the results are similar to when there are no interventions (Fig. 3c). This is expected since both Z_{C_k} and Z_{C′_k} contain the same binding ID vectors.

[Figure 4: two Shapes-task instances that share the context "The green object contains item P. The red object contains item I."]

Figure 4. Mean intervention samples.

3.2. Position Independence

Next, we hypothesize that the associations formed by the model are invariant to the activation positions of the object, color, or item, as they rely solely on the binding IDs. This implies that swapping the positions of Z_{O_0} and Z_{O_1} should not change the items associated with the objects. To validate this, we first obtain the activations of the context tokens Z_c (Sec. 3.1). Then, we compute a new stack of activations Z*_c wherein the positions of Z_{O_0} and Z_{O_1} are altered, following the procedure described in [4], adapted for models that use Rotary Position Embedding (RoPE) [14]. Unlike absolute position embeddings, RoPE incorporates positional information only through the attention score computations, without injecting it directly into the residual stream activations.

Results. Fig. 5 shows the mean log probabilities when the positions of Z_{O_0} and Z_{O_1} are progressively adjusted to get closer and ultimately swapped. We observe that the model answers with the correct item regardless of positions.

3.3. Mean Interventions

The factorizability and position independence results show that binding vectors are contained within the activations corresponding to the object, color, and item tokens and cause the model to form associations across image and text. If binding vectors were directly accessible, we could interchange them to observe if the model changes its answer. While this is not feasible, we can approximate the difference in binding vectors from the difference in activations.
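This approximation of binding-vector differences from activation differences can be sketched on synthetic data. In the toy setup below (assumed Gaussian content vectors and noise, not real model activations), the same object appears as tuple 0 in one instance and tuple 1 in its paired instance; its shared content vector cancels in the difference, so averaging over pairs recovers ∆_O = b_O(1) − b_O(0):

```python
import random

DIM = 64
rng = random.Random(1)
vec = lambda: [rng.gauss(0.0, 1.0) for _ in range(DIM)]

b0, b1 = vec(), vec()                  # binding vectors b_O(0), b_O(1)
true_delta = [x - y for x, y in zip(b1, b0)]

# Each pair: the *same* object (same content vector f) occupies the
# 0th tuple in one instance and the 1st tuple in the other, so
# Z_{O'_1} - Z_{O_0} = (f + b1) - (f + b0) = b1 - b0, up to noise.
diffs = []
for _ in range(200):
    f = vec()                          # shared content vector
    noise = lambda: [rng.gauss(0.0, 0.05) for _ in range(DIM)]
    z_pos0 = [a + b + n for a, b, n in zip(f, b0, noise())]
    z_pos1 = [a + b + n for a, b, n in zip(f, b1, noise())]
    diffs.append([p1 - p0 for p0, p1 in zip(z_pos0, z_pos1)])

# Mean over pairs approximates the binding-vector difference.
delta_hat = [sum(d[i] for d in diffs) / len(diffs) for i in range(DIM)]
err = max(abs(a - b) for a, b in zip(delta_hat, true_delta))
```

The estimate improves with the number of pairs because the per-pair noise averages out while the binding-vector difference is common to every pair.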
[Figure 5 plots: mean log probabilities of items I_0, I_1 for objects O_0, O_1 as the token positions of the objects/items/colors are progressively shifted.]

Figure 5. Position independence results. The integers on the x-axis show how much the positions of the first and second objects/items/colors are incremented and decremented respectively. The green line corresponds to no change in positions and the gray line corresponds to swapped positions. In all cases O_k ↔ I_k (blue solid O_0, I_0 and orange dashed O_1, I_1) have a higher probability than O_k ↔ I′_k.

To estimate ∆_O = b_O(1) − b_O(0), we consider two instances of the Shapes task as shown in Fig. 4. Let O_0, O_1 denote the objects in the first instance and O′_0, O′_1 denote the objects in the second instance. Notice that both O_0 and O′_1 are the same object, a green sphere. However, we expect their activations to contain different binding IDs. We can now estimate ∆_O as the difference Z_{O′_1} − Z_{O_0}. Concretely,
we compute ∆_O as the mean of the difference of activations over multiple pairs of instances (∆_O ≈ mean over (O_0, O′_1) pairs of [Z_{O′_1} − Z_{O_0}]). Similarly, we compute ∆_C = b_C(1) − b_C(0) and ∆_I = b_I(1) − b_I(0) from the color and item activations. Using these mean vectors (∆_O, ∆_C, ∆_I), we can now edit the binding vectors in the activations to alter the model response. For any new instance with the context c* = ctxt(O*_0 ↔ C*_0 ↔ I*_0, O*_1 ↔ C*_1 ↔ I*_1), we can alter the binding vector of the objects as Z_{O*_0} := Z_{O*_0} + ∆_O and Z_{O*_1} := Z_{O*_1} − ∆_O. This should result in a swap of object-item binding, with O*_0 and O*_1 being bound to I*_1 and I*_0 respectively. Similarly, altering the binding vector of the items as Z_{I*_0} := Z_{I*_0} + ∆_I and Z_{I*_1} := Z_{I*_1} − ∆_I should also exchange the model response. Altering the binding vectors in the color token activations will make the model now associate O_k, C_{k′} and I_k where k ≠ k′. However, O_k is still bound to I_k, and we expect no change in response.

Condition | Mean vectors (O_0↔I_0 / O_1↔I_1) | Random vectors (O_0↔I_0 / O_1↔I_1)
None      | 1.00 / 1.00                      | - / -
O         | 0.00 / 0.05                      | 1.00 / 1.00
I         | 0.05 / 0.00                      | 1.00 / 1.00
C         | 1.00 / 1.00                      | 1.00 / 1.00
O, I      | 1.00 / 0.95                      | 1.00 / 1.00
O, I, C   | 1.00 / 0.95                      | 1.00 / 1.00

Table 1. Mean ablation accuracies: Object (O), Item (I), Color (C).

Results. Tab. 1 shows the accuracy, measured as the fraction of samples where the correct item has the highest log probability among the possible items in the context. As expected, both object and item interventions individually change the model's response, while color interventions do not. Further, simultaneously performing object and item interventions restores the model's original response, since they now have the same binding IDs. We also repeat these experiments with random vectors that have the same magnitude but different directions. These vectors do not alter the model response, indicating that the specific directions of the mean vectors causally affect the binding.

3.4.
Experimental Details

Throughout the paper, we report results with LLaVA-OneVision-7B [10], which uses the SigLIP [18] vision encoder and encodes multiple crops from a single image. The Shapes task images are of size 384×384, with each object appearing in two crops and occupying 5×5 patch tokens. Empirically, we found that when intervening on object token activations, a 3-token padding on all sides in both crops yields optimal results. To estimate the difference of binding vectors, we use a separate set that contains different shapes (frustum, pyramid, prism and toroid), colors (lime, pink, gold, brown, orange and azure) and items (lowercase English letters). All colors and items span two text tokens.

4. Related Work

The Binding ID mechanism explains how LLMs associate concepts in context, leading to the identification of a binding subspace where bound tokens have a higher similarity than unbound ones [5]. Concurrently, researchers uncovered circuits for entity tracking in LLMs, allowing inference of entity properties from context [12]. The Shapes task is inspired by the text-based entity tracking task [9], which requires predicting an entity's state based on its initial description and applied operations. Prior works have analyzed attention heads in VLMs
to understand visual processing [8], shown that object information is localized to corresponding image token positions [11], and developed methods to manipulate image token representations to mitigate hallucinations [7]. Our work complements these efforts by examining the association between image and text representations.

Benchmarks like VTQA [1] and MuMuQA [13] pose multi-hop questions that require synthesis of visual and textual information, going beyond traditional VQA where answers rely primarily on visual inputs. They present an opportunity to explore how mechanisms such as Binding IDs could enhance reasoning in complex, realistic scenarios.

5. Conclusion

In this work, we explore how in-context associations occur in VLMs. We formulate the Shapes task, a simple and controlled QA task which requires the model to associate 3D objects in an image with their references in the text. Through experiments, we demonstrate that VLMs utilize binding ID vectors to bind concepts across image and text.

References

[1] Kang Chen and Xiangqian Wu. VTQA: Visual Text Question Answering via Entity Alignment and Cross-Media Reasoning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
[2] Blender Online Community. Blender - A 3D Modelling and Rendering Package. Blender Foundation, Stichting Blender Foundation, Amsterdam, 2018.
[3] Matt Deitke et al. Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Vision-Language Models. arXiv preprint arXiv:2409.17146, 2024.
[4] Jiahai Feng and Jacob Steinhardt. How do Language Models Bind Entities in Context? In International Conference on Learning Representations (ICLR), 2024.
[5] Jiahai Feng, Stuart Russell, and Jacob Steinhardt. Monitoring Latent World States in Language Models with Propositional Probes. In International Conference on Learning Representations (ICLR), 2025.
[6] Aaron Hurst et al. GPT-4o system card. arXiv preprint arXiv:2410.21276, 2024.
[7] Nicholas Jiang, Anish Kachinthaya, Suzanne Petryk, and Yossi Gandelsman. Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations. In International Conference on Learning Representations (ICLR), 2025.
[8] Omri Kaduri, Shai Bagon, and Tali Dekel. What's in the Image? A Deep-Dive into the Vision of Vision Language Models. arXiv preprint arXiv:2411.17491, 2024.
[9] Najoung Kim and Sebastian Schuster. Entity Tracking in Language Models. In Association for Computational Linguistics (ACL), 2023.
[10] Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. LLaVA-OneVision: Easy Visual Task Transfer. arXiv preprint arXiv:2408.03326, 2024.
[11] Clement Neo, Luke Ong, Philip Torr, Mor Geva, David Krueger, and Fazl Barez. Towards Interpreting Visual Information Processing in Vision-Language Models. In International Conference on Learning Representations (ICLR), 2025.
[12] Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, and David Bau. Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity Tracking. In International Conference on Learning Representations (ICLR), 2024.
[13] Revant Gangi Reddy et al. MuMuQA: Multimedia Multi-Hop News Question Answering via Cross-Media Knowledge Extraction and Grounding. In Association for the Advancement of Artificial Intelligence (AAAI)
, 2022.
[14] Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, and Yunfeng Liu. RoFormer: Enhanced Transformer with Rotary Position Embedding. arXiv preprint arXiv:2104.09864, 2021.
[15] Gemini Team. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530, 2024.
[16] Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. Investigating Gender Bias in Language Models Using Causal Mediation Analysis. In Advances in Neural Information Processing Systems (NeurIPS), 2020.
[17] Peng Wang et al. Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution. arXiv preprint arXiv:2409.12191, 2024.
[18] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid Loss for Language Image Pre-Training. In International Conference on Computer Vision (ICCV), 2023.
arXiv:2505.22202v1 [cs.CL] 28 May 2025

Let's Predict Sentence by Sentence

Hyeonbin Hwang1*, Byeongguk Jeon1*, Seungone Kim2, Jiyeon Kim1, Hoyeon Chang1, Sohee Yang3, Seungpil Won4, Dohaeng Lee4, Youbin Ahn4, Minjoon Seo1
1KAIST, 2Carnegie Mellon University, 3University College London, 4LG AI Research
{hbin0701, byeongguk, minjoon}@kaist.ac.kr

Abstract

Autoregressive Language Models (LMs) generate one token at a time, yet human reasoning operates over higher-level abstractions—sentences, propositions, and concepts. This contrast raises a central question: can LMs likewise learn to reason over structured semantic units rather than raw token sequences? In this work, we investigate whether pretrained LMs can be lifted into such abstract reasoning spaces, building on their learned representations. We present a framework that adapts a pretrained token-level LM to operate in sentence space, by autoregressively predicting continuous embeddings of next sentences. We explore two embedding paradigms inspired by classical representation learning: (i) semantic embeddings, learned via autoencoding to preserve surface meaning; and (ii) contextual embeddings, trained via next-sentence prediction to encode anticipatory structure. We evaluate both under two inference regimes: DISCRETIZED, which decodes each predicted embedding into text before re-encoding; and CONTINUOUS, which reasons entirely in embedding space for improved efficiency. Across four domains—mathematics, logic, commonsense, and planning—contextual embeddings under continuous inference show competitive performance with Chain-of-Thought (CoT) while reducing inference-time FLOPs by half on average. We also present early signs of scalability and modular adaptation. Finally, to visualize latent trajectories, we introduce SentenceLens, a diagnostic tool that decodes intermediate model states into interpretable sentences.
Together, our results indicate that pretrained LMs can effectively transition to abstract, structured reasoning within latent embedding spaces.*

1 Introduction

Autoregressive Language Models (LMs) have achieved remarkable success on complex reasoning tasks through a simple objective: Next-Token Prediction [1]. This success is further amplified by Chain-of-Thought (CoT), which generates explicit intermediate reasoning steps to guide the model [2]. Recent advancements demonstrate substantial gains in performance by scaling inference-time computation even further [3,4]. However, next-token prediction requires generating long reasoning chains one token at a time, making it computationally inefficient. Also, it remains unanswered whether reasoning at such granularity is genuinely optimal.

*Equal contribution
*Our code is available here.
Preprint. Under review.

[Figure 1: schematic of training and of the two inference regimes, (a) Discretized and (b) Continuous.]

Figure 1: Sentence-level reasoning framework. Training: the latent model reads the question tokens and previous embeddings, predicts ĥ_t, and a frozen decoder reconstructs s_t; Inference: the embedding can be rolled forward by (a) Discretized: decode → text → encode, or (b) Continuous: pass-through.

While token-level generation has driven recent progress, human cognition typically operates over higher-level abstractions—such as concepts, propositions, or full sentences [5,6,7]. Prior works suggest that language models may similarly benefit from operating at these higher levels, potentially enabling more structured and computationally efficient reasoning [8,9]. In this paper, we investigate whether pretrained language models can effectively build higher-level
https://arxiv.org/abs/2505.22202v1
representations directly by abstracting over their existing token-level representations, without the prohibitive cost of pre-training from scratch. Specifically, we introduce a framework that repurposes pretrained next-token Transformers to reason in a latent sentence-level embedding space. Instead of producing outputs token-by-token, our approach predicts continuous embeddings for entire sentences, which can be decoded back into natural language yet primarily function as abstract conceptual representations.

To systematically explore viable latent representations, we draw inspiration from the well-established dichotomy in classical representation learning between reconstruction-based and prediction-based methods [10,11,12]. We define two embedding paradigms: (1) Semantic embeddings, which prioritize preserving textual fidelity through autoencoding, and (2) Contextual embeddings, which focus on capturing predictive context via next-sentence prediction. We evaluate models trained with these embeddings under two inference regimes: DISCRETIZED, which decodes each predicted embedding into natural language before re-encoding it as the next input, and CONTINUOUS, which performs reasoning entirely within the continuous embedding space.

Our empirical findings demonstrate that contextual embeddings consistently outperform semantic embeddings across diverse reasoning domains including mathematics, logic, commonsense, and planning tasks. Notably, contextual embeddings using CONTINUOUS inference show competitive performance to token-level Chain-of-Thought reasoning while reducing inference-time computational cost by half on average. Finally, we introduce SentenceLens, a diagnostic tool that translates intermediate hidden states into readable sentences, thus providing intuitive transparency into the model's internal reasoning trajectories.
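The two inference regimes can be sketched with stand-in components. In the toy example below, "sentences" are strings like "step k", and the encoder, decoder, and latent model are one-line stubs of our own invention (not the paper's trained GPT-2 modules); DISCRETIZED re-encodes the decoded text at each step, while CONTINUOUS passes the predicted embedding straight through:

```python
# Toy sentence space: sentence "step k" <-> 1-D embedding [float(k)].
def encode(sentence):            # stand-in encoder
    return [float(sentence.split()[1])]

def decode(h):                   # stand-in frozen decoder
    return f"step {round(h[0])}"

def latent_model(h):             # stand-in next-embedding predictor
    return [h[0] + 1.1]          # ideal step (+1) plus a small error

def rollout(h0, steps, discretized):
    h, trace = h0, []
    for _ in range(steps):
        h = latent_model(h)
        if discretized:          # (a) decode -> text -> re-encode
            h = encode(decode(h))
        trace.append(decode(h))  # (b) continuous: keep h as-is
    return trace

start = encode("step 0")
disc = rollout(start, 4, discretized=True)
cont = rollout(start, 4, discretized=False)
```

The decode → encode round trip in the discretized regime snaps each slightly-off prediction back onto a valid sentence embedding, while the continuous regime skips that cost and lets small prediction errors accumulate in embedding space; this is the fidelity/efficiency trade-off between the two regimes.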
Overall, our analysis provides initial evidence that pretrained inductive biases acquired from token-level modeling can be effectively adapted to structured, abstraction-level reasoning within latent embedding spaces.

2 Sentence embeddings for autoregressive modeling

Unsupervised and semi-supervised sequence representation learning has predominantly evolved along two primary paradigms: reconstruction-based and prediction-based methods [10,11,12]. Both methodologies have demonstrated strong empirical performance, yet each emphasizes distinct representational strengths. Reconstruction-based methods, typically employing autoencoder architectures, excel at semantic fidelity by explicitly encoding and reconstructing input sequences [10], whereas prediction-based methods prioritize capturing contextual semantics by modeling relations to subsequent sequences [11].

[Figure 2: schematic of (a) reconstruction, producing semantic embeddings, and (b) prediction, producing contextual embeddings.]

Figure 2: Illustration of the different types of sentence embeddings used in our framework.

Previous research suggests that the optimal embedding strategy varies significantly depending on the target application [13]. In this light, we systematically explore both embedding paradigms within the context of sentence-level autoregressive modeling. Specifically, we adapt an autoregressive Language Model autoencoder framework to construct and evaluate two distinct embedding approaches: semantic embedding, derived through a reconstruction objective, and contextual embedding, derived through a predictive objective.

2.1 Sentence embedding construction

To ensure scalability and avoid vocabulary constraints inherent to discrete codebooks [14], we utilize a continuous embedding space. This approach facilitates flexible representational capacity scaling with embedding dimensionality [15].
We build upon the autoencoding framework proposed by ICAE [16] and adapt a decoder-only Transformer (e.g., GPT-2), employing shared parameters for encoding and decoding: θ_ENC = θ_DEC. Given an input sequence
https://arxiv.org/abs/2505.22202v1
x = (x_1, . . . , x_N), the encoder produces a sequence of hidden states H = (h_1, . . . , h_N). We then define the embedding h[−1] := h_N as the latent representation of the entire input sequence. This embedding conditions the decoder, trained autoregressively with cross-entropy loss:

ŷ = θ_DEC(h[−1])  and  L_CE = −Σ_{t=1}^{N} log p(y_t | y_{<t}, h[−1])

Note that most reasoning tasks consist of a question or instruction q, followed by an ordered sequence of reasoning steps (s_1, . . . , s_n). In this light, we construct training examples tailored to each embedding type as follows (see Figure 2):

Semantic embeddings. Each reasoning step s_i independently forms the input and reconstruction target x = y = s_i. Training this way ensures the embedding h[−1] encapsulates the complete and detailed semantics of the individual reasoning step.

Contextual embeddings. We form context–target pairs, where the context x includes the question and preceding reasoning steps (q, s_1, . . . , s_{i−1}), and the target is the current step y = s_i. Thus, embeddings must capture predictive cues essential for reasoning step generation. Optionally, to bridge semantic fidelity with predictive abstraction, we also try a contrastive regularization loss (InfoNCE), aligning contextual embeddings closer to corresponding semantic embeddings:

L_InfoNCE = −log [ exp(sim(ẑ_i, z^sem_i)/τ) / Σ_j exp(sim(ẑ_i, z^sem_j)/τ) ],

where ẑ_i is a contextual embedding and z^sem_i a semantic embedding. Negative examples z^sem_j are sampled within the batch.

Dataset      | Reconstruction: SEMANTIC (EM) | Prediction: CTX-B | CTX-C | CoT
GSM8K        | 98.5                          | 42.0              | 42.1  | 43.4
CSQA         | 98.5                          | 33.8              | 35.1  | 35.7
ProsQA       | 100.0                         | 80.2              | 75.3  | 77.5
Blocksworld  | 100.0                         | 89.9              | 90.1  | 84.3

Table 1: Performance of Semantic and Contextual Embeddings across datasets. For Semantic embeddings, we report exact match (EM). For Contextual embeddings, we compare final-answer accuracy (ACC) under different decoding schemes: CTX-B (unregularized), CTX-C (contrastive), and CoT (language-level chain-of-thought).
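The InfoNCE regularizer can be written directly from the formula above. The following is a minimal pure-Python sketch, assuming cosine similarity as sim(·, ·) and using in-batch negatives; the toy 2-D embeddings are invented for illustration, not learned ones:

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def cos(u, v): return dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))

def info_nce(z_ctx, z_sem, tau=0.1):
    """Mean InfoNCE loss over a batch: z_ctx[i] is a predicted
    contextual embedding, z_sem[i] its matching semantic embedding;
    the other batch entries serve as in-batch negatives."""
    losses = []
    for i, z in enumerate(z_ctx):
        logits = [cos(z, s) / tau for s in z_sem]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        losses.append(-(logits[i] - log_denom))
    return sum(losses) / len(losses)

# Toy batch: each contextual embedding is close to its own semantic target.
batch_ctx = [[1.0, 0.1], [0.0, 1.0]]
batch_sem = [[1.0, 0.0], [0.1, 1.0]]
loss = info_nce(batch_ctx, batch_sem)
```

When the pairing is correct the loss is near zero; misaligning the batch (e.g., reversing `batch_sem`) raises it, which is the gradient signal that pulls contextual embeddings toward their semantic counterparts.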
We refer to this regularized approach as Contextual-Contrastive (CTX-C) and the unregularized baseline as Contextual-Base (CTX-B).

2.2 Embedding evaluation

Setting We evaluate our framework using GPT-2 across four distinct reasoning domains: mathematical reasoning (GSM8K [17]), commonsense reasoning (CommonsenseQA [18]), logical reasoning (ProsQA [19]), and planning (Blocksworld). For each domain, we train on the respective training split and report accuracy on the corresponding test set, analyzing how well our framework generalizes across diverse linguistic subspaces (i.e., mathematical expressions, natural language, etc.).∗ See Appendix B and E for more details.

To evaluate semantic embeddings, we compute exact match (EM) between the original reasoning step s_i and the decoder output, assessing how faithfully the model reconstructs unseen steps. For contextual evaluation, since multiple correct next steps could lead to the correct answer, we roll out the model autoregressively: at each step, the generated output y is appended to the current input x, continuing until a terminal answer is produced. The final answer is then compared against the ground-truth answer. Results are reported in Table 1.

Results Across all domains, we observe that the autoencoder successfully restores the original sentences with high fidelity. This aligns with findings from Kuratov et al. [15], who show, both theoretically and empirically, that language models can compress a substantial number of tokens into
compact representations. Yet, since we form the CommonsenseQA (CSQA) task's SEMANTIC embeddings using a subset of the FineWeb-Edu corpus (∼100k documents), we highlight that a larger language space (compared to the synthetic, constrained ones, i.e., ProsQA and Blocksworld) involves higher difficulty.

In the Contextual configuration, model performance approaches that of the CoT baseline on three out of four benchmarks, and notably surpasses it on Blocksworld across both contextual variants. Introducing the contrastive alignment term (CTX-C) leads to a nuanced pattern: scores remain largely unchanged on GSM8K and Blocksworld, improve modestly on CommonsenseQA, but decline on ProsQA. These trends appear closely tied to each task's underlying semantic structure. CommonsenseQA questions exhibit substantial lexical variety, so anchoring each latent vector to its semantic counterpart helps tame surface variability. In contrast, ProsQA benefits from simultaneously tracking multiple evolving states; consequently, enforcing a single semantic target at each step restricts its representational flexibility, which is consistent with earlier findings [19, 21]. GSM8K and Blocksworld are highly symbolic and lexically sparse; thus, the baseline contextual embedding already forms an unambiguous mapping, leaving little space for improvement through additional regularization.

∗For CSQA restoration, we trained on a small subset of FineWeb-Edu [20] due to the small CSQA training set.

3 Sentence-Level Reasoning Model

Given the strong reconstruction and predictive capabilities of semantic and contextual embeddings, we now present a framework that leverages these embeddings for sentence-level reasoning (Figure 1).

3.1 Architecture

We adapt a pretrained decoder-only Transformer [22] to operate directly over continuous sentence embeddings instead of discrete natural language tokens. We refer to this model as the Latent Model θ_LAT.
Formally, given a natural language question q and a sequence of latent embeddings h_1, ..., h_t corresponding to previously generated sentences, the latent model predicts the embedding for the next sentence: ĥ_{t+1} = θ_LAT(q, h_{≤t}). At inference time, predicted embeddings ĥ_{t+1} are mapped to the next input embedding h_{t+1} using a mapping function M: R^d → R^d, where d denotes the embedding dimensionality: h_{t+1} = M(ĥ_{t+1}). This process continues autoregressively, forming a latent embedding trajectory that encodes the progression of reasoning steps.

At each step, a sentence decoder θ_DEC: R^d → T can decode latent embeddings back into natural language text. However, decoding intermediate reasoning steps is optional; embeddings can remain in their latent form to enhance computational efficiency, particularly when only the final answer is required. To this end, a lightweight termination classifier can evaluate each predicted embedding ĥ_t to determine when reasoning should conclude.∗

3.2 Training

A natural approach for this task is to train the transformer model to generate sentence embeddings by minimizing the mean squared error (MSE) between predicted and target embeddings. However, a single context often allows for several valid yet distinctly different continuations [8]. Under these conditions, MSE tends to blend these varied possibilities into a single averaged representation, thus blurring meaningful variation. To address this, we employ a cross-entropy (CE) loss calculated over natural language targets generated by a frozen decoder. This encourages predicted embeddings to align with the manifold defined by that decoder:

$$\mathcal{L}_{\text{CE}} = -\sum_{t=1}^{n-1} \log p\left(s_{t+1} \mid \theta_{\text{DEC}}(\hat{h}_{t+1})\right).$$

During training, the latent model conditions on the question q and
ground-truth sentence embeddings h_i, each computed using a fixed encoder θ_ENC. Additionally, to enhance the alignment between predicted and teacher-forced embeddings, we incorporate an InfoNCE loss [14]:

$$\mathcal{L}_{\text{InfoNCE}} = -\sum_{t=1}^{n-1} \log \frac{\exp\left(\mathrm{sim}(\hat{h}_{t+1}, h_{t+1})/\tau\right)}{\sum_j \exp\left(\mathrm{sim}(\hat{h}_{t+1}, h_j)/\tau\right)}.$$

The overall training objective combines both terms: $\mathcal{L}_{\text{overall}} = \mathcal{L}_{\text{CE}} + \lambda \mathcal{L}_{\text{InfoNCE}}$. To further improve training stability, we include shallow projection layers between the encoder output and the latent model input, and between the latent model output and the decoder input.

∗We use an oracle termination classifier for simplicity. See Appendix D for more details.

3.3 Inference

We explore two strategies for defining the mapping function M during inference. Let L represent the average token length per reasoning step, and R the total number of steps in a reasoning trace.

(1) Discretized (Language-Level) Inspired by SentenceVAE [23], we apply a decode-and-reencode procedure: M(ĥ_t) = E(D(ĥ_t)), where the predicted latent is first decoded into a sentence and then re-encoded into the model's input space. We refer to this as the DISCRETIZED mode, as each step explicitly traverses the discrete natural language interface. This approach helps mitigate error compounding [24], but comes at a higher computational cost, with attention cost scaling as O(L²R + R²). A detailed complexity analysis can be found in Appendix C.

(2) Continuous (Latent-Level) Following Coconut [19], we define the mapping as the identity function M = I, directly propagating the predicted latent embedding ĥ_t without intermediate decoding. In this CONTINUOUS mode, reasoning is performed entirely within the continuous embedding space, enabling significantly more efficient inference, with attention complexity reduced to O(R²).

Both methods offer computational advantages over natural language CoT, which incurs O(L²R²) attention complexity even under key-value caching.
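To make the asymptotic comparison concrete, the following sketch (ours, not from the paper; constant factors ignored) evaluates the dominant attention-cost term for each inference mode:

```python
def attention_cost(mode: str, L: int, R: int) -> int:
    """Dominant attention-cost term for a trace of R reasoning steps,
    each ~L tokens long. Constant factors and MLP costs are ignored,
    so values are only meaningful for relative comparison."""
    if mode == "cot":          # token-level CoT: O(L^2 R^2)
        return (L * R) ** 2
    if mode == "discretized":  # decode-and-reencode: O(L^2 R + R^2)
        return L * L * R + R * R
    if mode == "continuous":   # latent-only: O(R^2)
        return R * R
    raise ValueError(f"unknown mode: {mode}")

# For a trace of R = 10 steps with L = 20 tokens per step, the ordering
# continuous < discretized < cot holds, matching the analysis above.
costs = {m: attention_cost(m, L=20, R=10)
         for m in ("continuous", "discretized", "cot")}
```

Note that this counts attention only: as the text cautions, repeated decoding and encoding in the DISCRETIZED mode adds MLP overhead that can dominate when the total output length LR is short.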
However, the savings in the DISCRETIZED mode are conditional: they occur only when either (1) the encoder and decoder are not too computation-heavy, or (2) attention dominates over MLP cost, typically when the total output length LR is relatively long (e.g., Blocksworld). Otherwise, the repeated decoding and encoding introduce additional MLP overhead.∗

3.4 Experiments

Building upon prior studies [19, 21], we select GPT-2 as our baseline model and evaluate its performance across the four distinct reasoning domains detailed in Section 2. To investigate optimal embedding strategies for latent reasoning, we examine Semantic and Contextual (both Ctx-B and Ctx-C) embeddings from Section 2. We further explore a hybrid architecture, Sem (input) → Ctx (output), which mirrors the natural separation of representational roles found in conventional language modeling.

For evaluation, we compare sentence-level reasoning models against three baseline models. First, CoT represents a fully supervised model trained with access to both intermediate reasoning steps and final answers. Second, No-CoT omits step-level supervision and is trained solely to predict final answers. Third, we include Coconut [19], which gradually forgoes explicit token-level targets through curriculum-based substitution of a fixed number of last hidden states.

3.5 Results

Again, our main objective is to examine whether a latent sentence-level reasoning framework can effectively generalize to higher-level abstractions while preserving the learned priors of the model. Achieving performance comparable to token-level Chain-of-Thought (CoT) would provide preliminary evidence toward this goal. To this end, we address the following three research questions.

Q1: Can sentence-level reasoning
match token-level CoT performance? We hypothesize that effective reasoning is driven more by transitions between high-level concepts than by fine-grained token-level details. Empirically, sentence-level models match or even exceed CoT performance on logical and commonsense reasoning tasks. On mathematical and planning benchmarks, performance is slightly lower, though the gap remains modest. We attribute this to the greater precision often required in these domains, where continuous latent representations may be more prone to fidelity loss.

∗Note that using a contextual encoder incurs greater computational cost than a semantic encoder.

Table 2: Performance on ProsQA, CSQA, GSM8K, and Blocksworld across different embedding paradigms. Bolded values indicate the best performance among our proposed methods within each section. Baseline results are highlighted with background colors.

  Setting           ProsQA   CSQA   GSM8K   Blocksworld
  Direct
    No-CoT          76.7     23.3   18.7    36.8
  Language-level
    CoT             77.5     35.7   43.4    84.3
    Sem.            83.6     28.5   38.9    32.9
    Ctx-B           91.4     35.2   39.0    70.0
    Ctx-C           79.8     40.3   37.1    76.3
    Sem.→Ctx        83.8     34.9   40.3    67.1
  Latent-level
    Coconut [19]    97.0     34.0   34.1    37.9
    Sem.            86.0     27.5   29.6    30.8
    Ctx-B           92.6     37.0   37.4    70.5
    Ctx-C           81.6     35.5   38.3    80.8
    Sem.→Ctx        85.4     33.6   29.3    52.4

Q2: How does sentence-level reasoning differ between language-level and latent-level inference? To explore this, we compare model inference in the DISCRETIZED (language-level) space with that in the CONTINUOUS (latent-level) space. Results reveal complementary strengths: continuous models excel on logic and planning tasks, where reasoning benefits from uninterrupted latent-space composition and abstract state transitions. Conversely, discretized models show modest advantages on commonsense and mathematical benchmarks, likely due to the grounding effect of explicit linguistic representations.
Still, the observed performance gaps are narrow (3.3% on commonsense and 0.7% on math), indicating that latent inference remains a viable and compute-efficient alternative. These findings suggest that effective reasoning need not always traverse explicit language space; continuous representations alone may support structured inference.

Q3: Can sentence-level reasoning reduce computational cost? Table 3 compares computational costs (FLOPs) between the latent reasoning model and token-level CoT under forward-pass evaluation with key-value caching enabled. Latent reasoning employs an oracle answer classifier, executed via a single forward pass through the translator, that monitors the predicted embedding sequence and halts generation upon detecting a special answer token. The final latent embedding is decoded into natural language for evaluation.

Table 3: Average inference-time compute cost (GFLOPs) for each dataset under CoT and CTX-C CONTINUOUS inference.

  Dataset       CoT      Ctx-C
  CSQA          25.89    9.96
  ProsQA        100.99   70.19
  GSM8K         21.45    12.68
  Blocksworld   58.69    28.57

Note that we measure computational costs across the full latent pipeline, including the classifier and decoder components, which remain unoptimized.∗ Thus, reported efficiency gains represent a conservative estimate. Across tasks, CONTINUOUS inference achieves 1.5–2.5× better efficiency compared to token-level CoT. Notably, we highlight that even DISCRETIZED inference outperforms CoT in longer reasoning tasks (e.g., Blocksworld with average trace length R ∼ 9.1: 52.26 GFLOPs vs. 58.69 GFLOPs). We expect this efficiency gap to grow as the length of the reasoning trace increases.

4 Discussion

4.1 Potential Scalability and Modularity

Scalability We report preliminary observations suggesting that our framework has the potential to scale with increasing model capacity. Due
to computational constraints, our experiments are limited to sub-1B models; we evaluate GPT-2 Medium (345M) and GPT-2 Large (775M) on the CommonsenseQA (CSQA) benchmark, which exhibits clear performance scaling under CoT fine-tuning. As shown in Figure 3a, the Ctx-C configuration attains performance comparable to, and in some cases exceeding, CoT, despite operating entirely in latent space and incurring lower inference-time compute. While tentative, these findings suggest that latent reasoning could offer a more compute-efficient path toward generalization. However, we acknowledge that scaling to extensively pretrained models remains a challenge, since stable adaptation under greater distribution shifts could be more difficult [19].

∗To see the cost with a lightweight classifier, please refer to Appendix D.

Figure 3: (a) CoT vs. CTX-B on CommonsenseQA across GPT-2 variants. (b) GPT-4o qualitative evaluation of the reasoning steps, using a metric similar to that employed in [25], where SFT is trained using CoT and ours using CTX-B.

Using Off-the-Shelf Encoder–Decoder We investigate whether the encoder–decoder can be decoupled from the latent model and replaced with smaller, fixed components. This modular design seeks to reduce the computational burden of DISCRETIZED inference, especially in settings where only the latent reasoning module requires adaptation. To evaluate this hypothesis, we paired a lightweight GPT-2 Small encoder–decoder (trained on Ctx-C) with a GPT-2 Medium latent model and assessed performance on GSM8K.∗ This hybrid configuration achieved an accuracy of 42.23, compared to 47.69 for a fully fine-tuned GPT-2 Medium with CoT training. While accuracy decreases slightly, the results demonstrate that predictive embeddings can transfer across model architectures with reasonable degradation, supporting the feasibility of modular reuse.
Given prior findings on general embedding-space alignment across models [26, 27], further exploration with larger models and diverse tasks remains a promising direction.

4.2 SentenceLens: Towards Human-Readable Interpretability

We introduce SentenceLens, an interpretability tool that decodes intermediate hidden representations by passing them directly through the trained sentence-level decoder. In contrast to token-level inspection methods such as Logit Lens [28], SentenceLens operates at the sentence level, offering a more human-readable view of the model's evolving internal states across reasoning steps.

For example, in Table 4, we show how the model's prediction shifts across layers during the transition from one reasoning step to the next. When making the first-step prediction ĥ_1, Layer 19 introduces a general observation about eating and energy levels, while Layer 22 begins to center on the idea that hunger motivates goal-directed behavior. These intermediate activations reflect a gradual shift in conceptual focus, which in the last layer (36th) develops as: "If you are hungry, you are likely engaging in an activity that requires sustenance." Since the latent model frames reasoning as a continuous process, we hypothesize that intermediate latent states may become naturally decodable, allowing us to observe the progression of inference across steps. See Appendix A for more examples.

∗GSM8K was selected based on preliminary findings that moderately sized datasets help stabilize shallow MLP mappings across heterogeneous embedding spaces.

Step       Decoded Sentence(s)
Question   If you are hungry and going fishing, why would you be going fishing? A: to
see the fish B: have fun C: catching fish D: wet clothes E: killing

0→1   LAYER 19: A person who eats a lot experiences increased energy levels.
      LAYER 22: A person who is hungry seeks to alleviate their hunger. When you are hungry, you engage in an activity to satisfy your hunger.
      ···
1     If you are hungry, you are likely engaging in an activity that requires sustenance.
1→2   LAYER 9: If a person is hungry, they are likely to engage in eating.
      LAYER 20: The act of catching fish involves physical activity.
2     Fishing is a common activity for those who enjoy the outdoors.
2→3   LAYER 4: Fishing is a common activity for those who enjoy catching fish.
      LAYER 21: The act of catching fish can lead to enjoyment and recreation.
3     Fishing is a recreational activity that people engage in for fun.
3→4   LAYER 9: The act of catching fish provides a direct source of food.
      LAYER 21: The act of catching fish provides a direct source of food. People fish to enjoy the experience of catching fish.
4     Fishing is a recreational activity that people often engage in.
4→5   LAYER 5: Fishing is a recreational activity that is often pursued with friends. Therefore, fishing is a good reason to go fishing.
5     ### C

Table 4: Latent sentence transitions with SENTENCELENS for GPT2-Large under the CTX-C, CONTINUOUS setting. We visualize intermediate decoding across layers and reasoning steps. Highlighted rows represent the output from the final latent embedding at each step.

CoT Model Reasoning Trace
If you are hungry, you likely seek food to satisfy that hunger. Fishing is an activity that typically results in catching fish. Catching fish is a common reason for going fishing. Seeing the fish is a primary motivation for engaging in fishing. ### C

Table 5: Natural language CoT trace.
Output from the CoT-trained model (CoT).

Qualitative Analysis In addition, when decoding output embeddings at successive latent reasoning steps (e.g., Step 1 through Step 5), we find that the resulting sentences, while readily understandable, often lack the coherence and rigor characteristic of standard CoT responses. We compare the two models' outputs using GPT-4o evaluation with the rubric proposed by Ye et al. [25], which scores Relevance, Fluency, Conciseness, Soundness, and Interpretability on a 1-to-5 Likert scale. It turns out that the CTX-C model mostly produces reasoning chains of moderate quality (scores > 3); however, its performance falls short compared to CoT models trained directly in natural language space (Figure 3b). The largest weakness appears in Soundness, which aligns with earlier observations that high-level concept models may exhibit reduced coherence even after extensive pretraining [8]. While we believe this tradeoff is a natural consequence of abstraction, bridging this gap remains an interesting direction for future research.

Future Directions Another interesting direction is to self-train the model by using its own intermediate decoded sentences as auxiliary supervision targets. We also observe that the correct answer often surfaces early in the reasoning trajectory (see Appendix A). In this light, these
intermediate outputs could offer a novel training signal that could enhance both reasoning efficiency and stability. Furthermore, unlike prior latent reasoning approaches, our framework allows for sampling at the token level after decoding. This opens the door to applying reinforcement learning or trajectory-level optimization over the latent reasoning chain.

Figure 4: Performance change when injecting Gaussian random noise into different modes of inference, for the Ctx-C model on the GSM8K and CSQA datasets.

4.3 Fragility of Continuous Embeddings

Latent reasoning operates over high-dimensional embedding manifolds, which tend to be more sensitive to perturbations than discrete token-level autoregression [8, 24]. To systematically assess this fragility, we introduce synthetic noise at inference time, following the LCM team [8], with a 50% probability. We evaluate robustness across three intervention points in the reasoning pipeline: (1) Language-Level (Input): noise is applied to the input embedding; (2) Language-Level (Output): noise is added to the output embedding; and (3) Latent-Level: noise is directly injected into the predicted output embedding, which is then autoregressively consumed in the next step.

Empirically, we observe two key trends: (1) performance degrades more rapidly on GSM8K, where precise numerical reasoning amplifies the impact of noise; and (2) Language-Level inference (i.e., decoding and re-encoding) consistently yields greater robustness than latent-only reasoning across both tasks. This supports the intuition that grounding in language acts as a regularizing prior, mitigating error accumulation at the cost of additional compute. These findings highlight a trade-off between efficiency and stability, motivating future work on approaches that help prevent error compounding.

5 Related Works

Sentence Representations Sentence-level representation learning has historically followed two main paradigms: reconstruction and context prediction.
Early methods, such as sequence autoencoders [10] and Skip-Thought vectors [11], learned fixed-length sentence embeddings by reconstructing the input or neighboring sentences. Subsequent research, exemplified by Quick-Thought [29], shifted towards contrastive prediction, focusing on distinguishing the correct sentence context from distractors. Contrastive learning builds on these paradigms by explicitly aligning semantically related sentences while distinguishing unrelated examples. Models such as Sentence-BERT [30] and SimCSE [31], inspired by SimCLR [32], have produced robust sentence embeddings with excellent transfer performance. Our framework builds upon these developments by defining semantic and contextual embeddings and employing contrastive learning to align latent input-output pairs [12].

Sentence-Level Prediction Several models move beyond token-level generation to predict entire sentences. Latent-variable approaches such as VAEs [33] and hierarchical decoders [34] generate sentences from continuous codes. LCM [8] autoregresses over sentence-level "concept" embeddings in a multilingual, multimodal space, while CoCoMix [9] injects sparse autoencoder-derived vectors into hidden states to improve interpretability and control. Our method similarly operates over latent embeddings but distinguishes itself by building upon pretrained models rather than training from scratch. This approach allows us to leverage existing language understanding capabilities while introducing latent reasoning mechanisms.

Latent-Space Reasoning Efficiency and abstraction have motivated reasoning directly in embedding space, bypassing token generation. Joint-embedding architectures [35] and predictive coding frameworks [12] model representation dynamics by forecasting future embeddings. This idea has recently
been extended to language: Hao et al. [19] introduced continuous latent reasoning, where token-level embeddings are gradually replaced with continuous embeddings taken from the last-layer hidden states, through a curriculum-based strategy from Deng et al. [21]. Further extensions include, among others, methods by Shen et al. [36], which guide latent rollouts using self-distillation, and Su et al. [37], which propose mixing in discrete token embeddings from a trained VQ-VAE [14] for inference efficiency.

Our work differs from these approaches primarily in three ways. (1) We provide explicit access to intermediate latent states through decoding, offering clearer insights into the reasoning trajectory. (2) Our method uniquely supports token-level sampling during latent-level reasoning, opening exciting research avenues such as self-training and reinforcement learning. (3) Whereas previous methods require iterative sampling of latent representations during training, which involves n+1 forward passes per iteration, our approach completes this in a single forward pass, significantly improving scalability.

6 Conclusion

We present a framework that elevates pretrained language models from token-level generation to sentence-level reasoning by autoregressively predicting continuous embeddings of next-step sentences. This enables reasoning over more abstract conceptual units while retaining pretrained inductive biases. Our exploration of semantic and contextual embeddings reveals that contextual embeddings achieve competitive performance with token-level Chain-of-Thought (CoT) across diverse reasoning tasks, while significantly reducing inference-time computational costs under Continuous inference. Additionally, we demonstrate signs of scalability, modular reuse of encoder–decoder components, and enhanced interpretability through SentenceLens, which decodes latent embeddings into human-readable sentence-level traces.
These findings suggest that pretrained language models could be effectively adapted for structured reasoning in latent embedding spaces, opening new directions for efficient latent reasoning systems.

Limitations

Need for Large-Scale Experiments We conduct a preliminary exploration of sentence-level reasoning with GPT-2 variants. To keep experiments reproducible, we start with GPT-2 Small as our base model, following recent work on latent-level reasoning [21, 19], and then explore scalability by evaluating GPT-2 Medium, GPT-2 Large, and a lightweight hybrid that pairs a GPT-2 Medium latent core with a GPT-2 Small encoder–decoder.

During our experiments, we observed that larger models become somewhat more sensitive to hyperparameter choices, which could often lead to an increased performance gap between our method and CoT training. We note that this increased gap has been observed in similar preliminary studies when scaled to more competitive models (i.e., Llama 3 [38]), and conjecture that this is one of the reasons why recent works have turned towards pretraining. We hypothesize that this challenge arises from the widening gap between the token-level embedding distributions learned during pretraining and the compact, coarser-grained manifold our adapter enforces. In effect, the very inductive biases that make large models robust in token space may conflict with sentence-level abstractions. A systematic study of this tension, and the design of transfer mechanisms that preserve high-capacity knowledge while avoiding overfitting to the latent manifold, remains an important avenue for future research.

Fragility of Latent Reasoning As illustrated in Figure 4, pure latent reasoning, conducted entirely within a continuous embedding space, becomes notably fragile. Unlike DISCRETIZED inference, which introduces a discrete decoding step that inherently quantizes
minor perturbations, the continuous pathway lacks such built-in stabilization. This discrete bottleneck serves as a form of regularization, filtering out numerical noise and constraining the model's trajectory to a finite set of linguistically meaningful sequences. However, this regularization comes at the expense of expressivity, limiting outputs to token sequences present in the vocabulary. In a fully continuous framework, the model must learn to establish implicit attractors or decision boundaries that keep trajectories within a coherent manifold, effectively performing a form of soft discretization. These learned boundaries, being approximate, may allow small deviations to persist and amplify over successive reasoning steps, potentially leading to significant semantic errors, especially in tasks demanding precision or extended reasoning chains. This vulnerability mirrors challenges observed in continuous control systems, where minor deviations can accumulate over time, resulting in substantial performance degradation unless addressed through specialized stabilization mechanisms [24]. Future work could explore hybrid frameworks that integrate discrete bottlenecks at critical junctures within the reasoning process, aiming to combine the robustness of discretization with the flexibility of continuous representations.

Training from Scratch Training a model from scratch directly at a higher level of abstraction, i.e., in sentence-embedding space, appears at first glance to be the cleanest path toward robust high-level reasoning. Prior work argues that models initialized on discrete-token objectives must later overcome a distribution shift when asked to operate over sentence-level abstractions, a difficulty that intensifies as model size and pretraining data size increase [19, 21]; recent work has therefore leaned towards pretraining [39, 36]. Yet genuine intelligence might not rely on starting from a clean slate each time the abstraction level changes.
We hypothesize that a system that truly generalizes beyond human capability must be able to climb the ladder of abstraction after exposure to raw experience, flexibly re-encoding its knowledge in coarser units. At the same time, safety considerations dictate that these higher-order representations remain interpretable, anchored to a manifold we can inspect and, when necessary, constrain. Our adaptation framework takes a step in this direction: it shows that a pretrained token-level language model can be lifted, with modest additional supervision, onto an interpretable sentence manifold without retraining everything from scratch. By demonstrating both the promise and the fragility of this approach, the present work highlights a critical research frontier: designing models that learn to abstract while preserving previously learned inductive biases.

Broader Impacts

This work introduces a novel framework for reasoning in continuous latent space, offering both practical and societal benefits. By avoiding token-level autoregressive decoding, it reduces computational overhead and may lower the environmental footprint of large-scale inference. Importantly, our method maintains interpretability by anchoring latent representations to human-readable abstractions.

Nonetheless, broader risks remain. If latent reasoning frameworks are deployed without transparency mechanisms, they may obscure decision processes, especially in high-stakes domains. Additionally, latent representations could encode and propagate biases present in pretraining data. As reasoning becomes more abstracted from language, care must be taken to ensure meaningful human oversight is preserved. We encourage future work to strengthen interpretability guarantees and explore safeguards that prevent misuse or unintended consequences.

Acknowledgment

We thank Seonghyeon Ye, Jinho Park, Seongyun Lee,
and Jaehyeok Doo for their insightful discussions and valuable feedback.

References

[1] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. J. Mach. Learn. Res., 3:1137–1155, March 2003. ISSN 1532-4435.

[2] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V. Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

[3] Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card, 2024.

[4] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.

[5] Jerry Fodor. The Language of Thought. Harvard University Press, 1975.

[6] Hugo Mercier and Dan Sperber. Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2):57–74, 2011. doi: 10.1017/S0140525X10000968.

[7] Yoshua Bengio. The consciousness prior, 2019. URL https://arxiv.org/abs/1709.08568.

[8] LCM team, Loïc Barrault, Paul-Ambroise Duquenne, Maha Elbayad, Artyom Kozhevnikov, Belen Alastruey, Pierre Andrews, Mariano Coria, Guillaume Couairon, Marta R. Costa-jussà, David Dale, Hady Elsahar, Kevin Heffernan, João Maria Janeiro, Tuan Tran, Christophe Ropers, Eduardo Sánchez, Robin San Roman, Alexandre Mourachko, Safiyyah Saleem, and Holger Schwenk. Large concept models: Language modeling in a sentence representation space, 2024. URL https://arxiv.org/abs/2412.08821.

[9] Jihoon Tack, Jack Lanchantin, Jane Yu, Andrew Cohen, Ilia Kulikov, Janice Lan, Shibo Hao, Yuandong Tian, Jason Weston, and Xian Li. LLM pretraining with continuous concepts, 2025. URL https://arxiv.org/abs/2502.08524.
[10] Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning, 2015. URL https://arxiv.org/abs/1511.01432.
[11] Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. Skip-thought vectors, 2015. URL https://arxiv.org/abs/1506.06726.
[12] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding, 2019. URL https://arxiv.org/abs/1807.03748.
[13] Felix Hill, Kyunghyun Cho, and Anna Korhonen. Learning distributed representations of sentences from unlabelled data, 2016. URL https://arxiv.org/abs/1602.03483.
[14] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning, 2018. URL https://arxiv.org/abs/1711.00937.
[15] Yuri Kuratov, Mikhail Arkhipov, Aydar Bulatov, and Mikhail Burtsev. Cramming 1568 tokens into a single vector and back again: Exploring the limits of embedding space capacity, 2025. URL https://arxiv.org/abs/2502.13063.
[16] Tao Ge, Jing Hu, Lei Wang, Xun Wang, Si-Qing Chen, and Furu Wei. In-context autoencoder for context compression in a large language model, 2024. URL https://arxiv.org/abs/2307.06945.
[17] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[18] Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge, 2019. URL https://arxiv.org/abs/1811.00937.
[19] Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. Training large language models
to reason in a continuous latent space, 2024. URL https://arxiv.org/abs/2412.06769.
[20] Guilherme Penedo, Hynek Kydlíček, Loubna Ben allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro von Werra, and Thomas Wolf. The FineWeb datasets: Decanting the web for the finest text data at scale, 2024. URL https://arxiv.org/abs/2406.17557.
[21] Yuntian Deng, Yejin Choi, and Stuart Shieber. From explicit CoT to implicit CoT: Learning to internalize CoT step by step, 2024. URL https://arxiv.org/abs/2405.14838.
[22] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2023. URL https://arxiv.org/abs/1706.03762.
[23] Hongjun An, Yifan Chen, Zhe Sun, and Xuelong Li. SentenceVAE: Enable next-sentence prediction for large language models with faster speed, higher accuracy and longer context, 2024. URL https://arxiv.org/abs/2408.00655.
[24] Max Simchowitz, Daniel Pfrommer, and Ali Jadbabaie. The pitfalls of imitation learning when actions are continuous, 2025. URL https://arxiv.org/abs/2503.09722.
[25] Seonghyeon Ye, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, Seungone Kim, Yongrae Jo, James Thorne, Juho Kim, and Minjoon Seo. FLASK: Fine-grained language model evaluation based on alignment skill sets. arXiv preprint arXiv:2307.10928, 2023.
[26] Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data, 2018. URL https://arxiv.org/abs/1710.04087.
[27] Rishi Jha, Collin Zhang, Vitaly Shmatikov, and John X. Morris. Harnessing the universal geometry of embeddings, 2025. URL https://arxiv.org/abs/2505.12540.
[28] nostalgebraist. interpreting gpt: the logit lens, 2020. URL https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens.
[29] Lajanugen Logeswaran and Honglak Lee.
An efficient framework for learning sentence representations, 2018. URL https://arxiv.org/abs/1803.02893.
[30] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT-networks, 2019. URL https://arxiv.org/abs/1908.10084.
[31] Tianyu Gao, Xingcheng Yao, and Danqi Chen. SimCSE: Simple contrastive learning of sentence embeddings, 2022. URL https://arxiv.org/abs/2104.08821.
[32] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations, 2020. URL https://arxiv.org/abs/2002.05709.
[33] Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space, 2016. URL https://arxiv.org/abs/1511.06349.
[34] Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. A hierarchical latent variable encoder-decoder model for generating dialogues, 2016. URL https://arxiv.org/abs/1605.06069.
[35] Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture, 2023. URL https://arxiv.org/abs/2301.08243.
[36] Zhenyi Shen, Hanqi Yan, Linhai Zhang, Zhanghao Hu, Yali Du, and Yulan He. CODI: Compressing chain-of-thought into continuous space via self-distillation, 2025. URL https://arxiv.org/abs/2502.21074.
[37] DiJia Su, Hanlin Zhu, Yingchen Xu, Jiantao Jiao, Yuandong Tian, and Qinqing Zheng. Token assorted: Mixing latent and text tokens for improved language model reasoning, 2025. URL https://arxiv.org/abs/2502.03275.
[38] Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
[39] Sachin Goyal, Ziwei Ji, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar, and Vaishnavh Nagarajan. Think before you speak: Training language models with pause tokens, 2024. URL https://arxiv.org/abs/2310.02226.
[40] Bernd Bohnet, Azade Nova, Aaron T Parisi, Kevin Swersky, Katayoon Goshvadi, Hanjun Dai, Dale Schuurmans, Noah Fiedel, and Hanie Sedghi. Exploring and benchmarking the planning capabilities of large language models. arXiv preprint arXiv:2406.13094, 2024.

A SentenceLens Examples

We include a representative SentenceLens example that highlights additional key observations. Specifically, the model often identifies the correct answer early in the latent trajectory; however, subsequent chain-of-thought (CoT) tokens exhibit a drift that ultimately leads to an incorrect prediction. (The correct answer is C.) This suggests room for improvement by using intermediate representations as explicit supervision targets, which could guide the construction of model-centric datasets and self-training methods.

Question: For many males hair is a concern as they get older, it begins to what, causing a receding hairline? A: thin out B: grow in ear C: fall out D: bulge E: composted
Step 0→1: LAYER 19: The human body requires a certain amount of energy to maintain its functions.
LAYER 20: The primary cause of aging is the loss of moisture.
Step 1: One of the common changes in hair density over time is the decrease in hair volume.
Step 1→2: LAYER 4: A common reason for hair loss is due to a decrease in hair density and diameter. LAYER 23: The aging process causes various health issues.
Step 2: The hair loss is often associated with hair loss.
Step 2→3: LAYER 3: This process is often referred to as balding. LAYER 23: The aging process leads to reduced body size.
Step 3: A thinning hairline is commonly associated with hair loss.
Step 3→4: LAYER 2: This process can lead to a decrease in hair density and diameter. LAYER 11: The process of getting older leads to the body becoming thinner.
Step 4: This process is commonly referred to as fading.
Step 4→5: LAYER 4: This process is common in older individuals who lack regular hair growth.
Step 5: ### A (Incorrect)
Table 6: Example of a latent reasoning trajectory inspected with SentenceLens. Although early steps' intermediate layers demonstrate accurate associations with hair loss and balding, the final prediction selects an incorrect choice, showing a drift in reasoning at later stages.

Question: Why would you take a bus to work? A: commute B: flying C: get somewhere D: travel E: go home
Step 0→1: LAYER 19:
A person spends time traveling between different locations. LAYER 20: A person spends time commuting to work. LAYER 21: A person spends time traveling, which often involves moving from one place to another. LAYER 22: A person spends time traveling, which often involves traveling across distances.
Step 1: People often take the bus to reach a destination.
... (intermediate steps not shown)
Step 5: ### A (Correct)
Table 7: Early Answer Emergence in Latent Reasoning. The model brings up the concept of "commuting" in the reasoning chain even before the first autoregressive step completes. This hints at potential efficiency gains by leveraging early, confident predictions as supervision signals in training.

B Dataset Description

Mathematics. We use the GSM8K dataset [17], which consists of grade-school math word problems originally comprising 7.8k training and 1.3k test samples. Following prior expansions [19, 21], we adopt an extended version containing approximately 370k training examples to support large-scale latent model training.

Planning. Following prior work [40], we use the Blocksworld environment for planning evaluation, but construct the dataset generation pipeline using our own Python implementation. We evaluate the model on 7-block configurations, ensuring that the initial and goal states do not overlap across the training, validation, and test sets. We use 9.9k samples for training and 380 samples each for validation and testing.

Logical. We adopt ProsQA [19], a synthetic dataset grounded in first-order logic. Each instance presents multiple distractors and requires multi-hop reasoning over a structured graph. Prior work highlights that latent models capable of multi-state tracking exhibit strong performance on this task. We use a 17.8k training set and 500 samples for evaluation.

Commonsense. We use CommonsenseQA [18], a multiple-choice benchmark that lacks explicit Chain-of-Thought (CoT) supervision.
To enable training with intermediate reasoning steps, we augment the data using GPT-4o to generate CoT-style rationales. Our training split includes 8.5k examples, and for evaluation, we reserve 611 samples from the validation set. Figure 6 illustrates representative examples from each dataset.

C Computation Complexity Analysis

Attention Complexity under KV-caching. Let L be the average number of tokens per sentence, R the number of reasoning steps, and ignore the prompt length N_0 in leading order.

(1) Chain-of-Thought (CoT). Each step emits L new tokens into the context. Before step t, the context length is N_0 + (t-1)L, so

C_{CoT} = \sum_{t=1}^{R} \sum_{j=1}^{L} \left[ N_0 + (t-1)L + (j-1) \right] = O(L^2 R^2).

(2) Contextual Embedding Mode. At each step the model (i) decodes one latent into an L-token sentence and (ii) attends over all retained tokens to predict the next latent:

\underbrace{\sum_{t=1}^{R} \sum_{j=1}^{L} j}_{O(L^2 R)} + \underbrace{\sum_{t=1}^{R} \left( N_0 + (t-1)L \right)}_{O(L R^2)} = O(L^2 R + L R^2).

(3) Language-Grounded Mode. Each step (i) processes only latents in the main chain, O(R^2), and (ii) decodes and re-encodes an L-token sentence, O(L^2 R), yielding

C_{LG} = O(R^2 + L^2 R).

(4) Pure Latent Mode. Each step adds one latent vector; attending over t-1 latents gives

C_{latent} = \sum_{t=1}^{R} (N_0 + t - 1) = O(R^2).

Summary of leading-order costs:

C_{CoT} = O(L^2 R^2), C_{contextual} = O(L^2 R + L R^2), C_{LG} = O(L^2 R + R^2), C_{latent} = O(R^2).

MLP Overhead. In addition to attention cost, every decoded or re-encoded token incurs feed-forward (MLP) computation. More specifically:
• CoT & Contextual Embedding: emits L tokens per step → processes L × R tokens
through the MLP → O(LR).
• Language-Grounded: With a semantic encoder, each step decodes and re-encodes L tokens on compact codes, processing 2L tokens per step for an MLP cost of O(LR). If instead a contextual encoder must re-attend over up to N_0 + (t-1)L tokens each pass, it incurs an additional O(L R^2) MLP overhead, which can erode the attention savings unless the encoder is shallow or non-autoregressive.
• Pure Latent: processes one latent per step → O(R).

Concluding Remark. Under KV-caching, the Language-Grounded mode with a semantic encoder adds an O(L^2 R) decode/re-encode overhead, but this grounding makes it ideal for tasks sensitive to error compounding or instability (i.e., Mathematics). In contrast, the Pure Latent mode eliminates all token-level context (attention O(R^2), MLP O(R)), offering maximal efficiency when possible.

D Termination Classifier

While we initially assume an oracle termination signal by using the first token generated by the decoder, we also demonstrate that this decision can be learned by a lightweight classifier. Specifically, we train a three-layer feedforward neural network (MLP) to identify the answer sentence during CONTINUOUS inference. The MLP consists of linear layers with hidden dimensions of 192 and 48, each followed by a GELU activation, and outputs a single logit for binary classification (continue versus terminate). It is trained using binary cross-entropy loss with logits (BCEWithLogitsLoss). Note that the average inference GFLOPs, reported in Table 9, are lower than those reported in Table 3.

E Experiment Details

Each dataset requires task-specific hyperparameter choices due to variation in problem structure and reasoning complexity. For all experiments, we report the best test-set accuracy across saved checkpoints (including baselines). When training all of our models (Latent Model, Encoder, and Decoder), we initialize them from the SFT checkpoint.
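The leading-order attention costs derived in Appendix C can be sanity-checked numerically by evaluating the sums directly. The following is an illustrative sketch (not the authors' code); N_0, L, and R follow the definitions in the text.

```python
# Numeric check of the leading-order attention costs from Appendix C.
# N0 = prompt length, L = tokens per sentence, R = number of reasoning steps.
# Illustrative sketch only, not the authors' implementation.

def cost_cot(N0, L, R):
    # Token j of step t attends over N0 + (t-1)*L + (j-1) positions.
    # Leading order: O(L^2 R^2).
    return sum(N0 + (t - 1) * L + (j - 1)
               for t in range(1, R + 1) for j in range(1, L + 1))

def cost_contextual(N0, L, R):
    # (i) decode one latent into L tokens (triangular term, O(L^2 R)), plus
    # (ii) one latent prediction attending over all retained tokens (O(L R^2)).
    decode = sum(j for _ in range(R) for j in range(1, L + 1))
    predict = sum(N0 + (t - 1) * L for t in range(1, R + 1))
    return decode + predict

def cost_latent(N0, R):
    # One latent per step, attending over N0 + (t-1) previous latents: O(R^2).
    return sum(N0 + t - 1 for t in range(1, R + 1))

if __name__ == "__main__":
    # With N0 = 0 and L = 10, CoT dominates while the latent mode stays cheap.
    print(cost_cot(0, 10, 100), cost_contextual(0, 10, 100), cost_latent(0, 100))
    # → 499500 55000 4950
```

Doubling R roughly quadruples the CoT cost, consistent with the O(L^2 R^2) term, while the pure-latent cost also grows as O(R^2) but without the L^2 factor.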
The number of training epochs for each stage was selected based on convergence trends observed during the early stages of experimentation. Please note that we use a small portion of Fineweb-Edu [20] for the CSQA task's restoration (i.e., training for semantic embeddings). We report the hyperparameters used in Tables 10 and 11.

F Evaluation Prompt

Please refer to Figure 5.

Split  Metric                  CSQA   ProsQA  GSM8K  Blocksworld
TRAIN  Question tokens/sample  39.0   360.4   42.2   146.8
       Steps/sample            5.6    3.8     3.6    8.9
       Tokens/step             10.9   9.5     6.0    8.0
VALID  Question tokens/sample  38.4   361.0   55.1   146.5
       Steps/sample            5.6    3.8     4.2    9.2
       Tokens/step             10.7   9.5     6.0    8.0
TEST   Question tokens/sample  38.8   357.0   56.8   146.6
       Steps/sample            5.6    3.8     4.3    9.1
       Tokens/step             10.8   9.5     6.1    8.0
Table 8: Dataset statistics for each reasoning benchmark across train, validation, and test splits.

Dataset      CoT     Ctx-C   Classifier accuracy
CSQA         25.89   8.51    99.36
ProsQA       100.99  64.02   99.76
GSM8K        21.45   10.80   99.46
Blocksworld  58.69   26.73   97.95
Table 9: Average inference-time compute cost (GFLOPs) on each dataset under CoT and Ctx-C CONTINUOUS inference, with the accuracy of the trained classifier.

Stage                    GSM8K  CSQA  ProsQA  Blocksworld
SFT*
  Epochs                 20     20    20      100
  LR                     1e-4   1e-4  1e-4    1e-4
  Batch                  64     64    64      64
EMBEDDING: RESTORATION
  Epochs                 3      5     3       100
  LR                     5e-4   5e-4  5e-4    1e-4
  Batch                  256    512   128     1024 (256*4)
EMBEDDING: PREDICTION
  Epochs                 30     50    50      50
  LR                     5e-4   5e-4  5e-4    1e-4
  Batch                  128    128   96      64
LATENT AUTOREG.
  Epochs                 200    300   50      200
  Eval
Freq                     every 10  every 10  every 2  every 10
  LR                     5e-4      5e-4      5e-4     5e-4
  Batch                  128       128       32       64
Table 10: Training configurations of GPT-2 for each dataset and training stage. *SFT includes both CoT and No-CoT variants.

Stage                    GPT-2 Small  GPT-2 Medium  GPT-2 Large (LoRA, r=256, a=1024)
SFT
  Epochs                 20           20            20
  LR                     1e-4         1e-4          1e-4
  Batch                  64           64 × 8        64 × 8
EMBEDDING: RESTORATION
  Epochs                 5            5             5
  LR                     5e-4         5e-4          5e-4
  Batch                  512          128           128
  Notes                  used FW subset  used FW subset  used FW subset
EMBEDDING: PREDICTION
  Epochs                 50           50            50
  LR                     5e-4         5e-5          1e-4
  Batch                  128          128           64
LATENT AUTOREG.
  Epochs                 300          300           300
  Eval Freq              every 10     every 10      every 2
  LR                     5e-4         1e-4          1e-4
  Batch                  128          64            128
  Notes                  —            —             w. grad ckpting
Table 11: Training configurations by model size and stage. LoRA configuration used for GPT-2 Large.

Figure 5: Evaluation prompt given to GPT-4o for judging the quality of intermediate reasoning steps.

Figure 6: Representative examples from each dataset.
GSM8K: "Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?" <<16-3-4=9>> <<9*2=18>> 18
CSQA: "They were kissing each other goodbye, they had no worries because their relationship had a strong foundation of what? A: partner B: trust C: cooperation D: bricks E: herpes" with reasoning steps about trust as the foundation of a strong relationship, concluding ### B
ProsQA: a long list of first-order rules (e.g., "Sally is a scrompus. Every scrompus is a rempus. Every rempus is a sterpus. ### Sally is a sterpus."), ending with the query "Is Sally a hilpus or sterpus?"
Blocksworld: "In the Blocksworld domain, blocks can be stacked on top of each other or placed on the table. Only one block can be moved at a time, and only clear (unblocked) blocks can be moved. Given the initial configuration and the goal configuration, what is the minimum number of moves required to reach the goal?" followed by an initial state and a goal state over blocks A–G.
arXiv:2505.22203v1 [cs.LG] 28 May 2025

Pitfalls of Rule- and Model-based Verifiers – A Case Study on Mathematical Reasoning

Yuzhen Huang*1, Weihao Zeng*1, Xingshan Zeng2, Qi Zhu3, Junxian He1
1The Hong Kong University of Science and Technology, 2The Chinese University of Hong Kong, 3Tsinghua University
https://github.com/hkust-nlp/RL-Verifier-Pitfalls

Abstract

Trustworthy verifiers are essential for the success of reinforcement learning with verifiable reward (RLVR), which is the core methodology behind various large reasoning models such as DeepSeek-R1. In complex domains like mathematical reasoning, rule-based verifiers have been widely adopted in previous works to train strong reasoning models. However, the reliability of these verifiers and their impact on the RL training process remain poorly understood. In this work, we take mathematical reasoning as a case study and conduct a comprehensive analysis of various verifiers in both static evaluation and RL training scenarios. First, we find that current open-source rule-based verifiers often fail to recognize equivalent answers presented in different formats across multiple commonly used mathematical datasets, resulting in non-negligible false negative rates. This limitation adversely affects RL training performance and becomes more pronounced as the policy model gets stronger. Subsequently, we investigate model-based verifiers as a potential solution to address these limitations. While static evaluation shows that model-based verifiers achieve significantly higher verification accuracy, further analysis and RL training results imply that they are highly susceptible to hacking, where they misclassify certain patterns in responses as correct (i.e., false positives). This vulnerability is exploited during policy model optimization, leading to artificially inflated rewards.
Our findings underscore the unique risks inherent to both rule-based and model-based verifiers, aiming to offer valuable insights for developing more robust reward systems in reinforcement learning.

1 Introduction

Reinforcement learning (RL) allows models to continuously improve their decisions or responses through interactions with an environment, guided by the goal of maximizing feedback rewards. This dynamic learning paradigm has recently demonstrated strong potential in pushing large language models (LLMs) beyond the limitations of static training. Recently, OpenAI-o1 [Jaech et al., 2024] and DeepSeek-R1 [DeepSeek-AI et al., 2025] have demonstrated that RL can significantly enhance the complex reasoning abilities of LLMs. Subsequently, a productive line of research has successfully leveraged RL to improve open-weight models on tasks such as mathematical reasoning [Zeng et al., 2025a, Yu et al., 2025, Hu et al., 2025]. Reward systems used in this context are mostly rule-based verifiers, which assess whether model outputs match the ground-truth answer using hand-crafted, programmatic criteria. Intuitively, rule-based verification has inherent limitations and may fail to capture correct answers expressed in different formats, especially for longer ones. However, despite their widespread use, the limitations

*Equal Contribution. Correspondence to Yuzhen Huang (yhuanghj@cse.ust.hk), Weihao Zeng (wzengak@connect.ust.hk), and Junxian He (junxianh@cse.ust.hk).

[Figure 1 plots: evaluation accuracy curves (maxima 55.05%, 58.35%, and 58.10%) for the Rule-Based, R1-Qwen-1.5B (hybrid), and R1-Verifier-1.5B (hybrid) verifiers, and training vs. oracle reward curves for each reward system; the R1-Verifier-1.5B panel is labeled "Hacking".]
Figure 1: The training and evaluation curves of RL on Qwen-2.5-7B using different verifiers, with the x-axis representing training iterations in all plots.
Left illustrates the evaluation accuracy averaged over multiple benchmarks, including
GSM8K, MATH500, Minerva Math, OlympiadBench, AIME24, and AMC23. Right depicts changes in reward values during training. The "training rewards" indicate the rewards provided by the corresponding reward system to the policy model, whereas the "oracle rewards" represent the rewards the model receives when judged by combining the verifier with GPT-4o. We provide a detailed breakdown of the evaluation results in Table 2.

of rule-based verification in previous RL practices remain poorly understood. For example, how accurate is rule-based verification in those RL projects? Does incorrect verification significantly influence RL performance? In this work, we first seek to address these two questions by conducting a comprehensive analysis of existing rule-based verifiers across several widely used open-source mathematical datasets for RL. In static, classification-based evaluations, our results show that while rule-based verifiers are highly effective at recognizing correct answers when the responses closely match the ground-truth format, notable failures occur when the generated answers are more diverse or fall into long-tail distributions, leading to an average recall rate of only 86%, which means 14% of correct responses are classified as incorrect. More concerning is the clear trend of increasing false negative rates as the generation model becomes stronger, signaling a potential risk as we advance to more capable models. To address this issue and assess whether more accurate verifiers can enhance RL performance, we further investigate model-based verifiers by leveraging off-the-shelf open-weight models as well as training new ones. We find that model-based verifiers significantly outperform rule-based verifiers in classification-based evaluations – for example, improving the recall rate from 84% to 92% on the Skywork-OR1 dataset [He et al., 2025].
In our subsequent RL training experiments, however, we observe that model-based verifiers introduce unique challenges and yield mixed outcomes: while some verifiers can improve RL results by an average of more than 3 absolute points over rule-based verifiers, others are vulnerable to hacking, leading to suboptimal RL training results (see the left side of Figure 1). Reward hacking – a well-known issue in RL – refers to the exploitation of specific patterns by the policy model to deceive the reward system and obtain artificially high rewards (illustrated in the bottom right of Figure 1). Notably, we find that our trained verifier, despite achieving higher classification accuracy than off-the-shelf alternatives, is more susceptible to hacking during RL training. These findings indicate that the classification accuracy of a verifier does not necessarily reflect its resistance to reward hacking, and therefore may not be a reliable indicator of its effectiveness in RL training. In the final part of our study, we conduct a pilot investigation into specific patterns that can exploit vulnerabilities in verifiers. We construct a range of adversarial patterns inspired by our case studies, such as the insertion of empty characters or garbled text. Using these constructed “hacking data”, we evaluate whether various model-based verifiers can be deceived. Our results show that most verifiers are easily fooled by these patterns, and discriminative verifiers are more robust than generative ones. While fine-tuning improves static evaluation scores, it does not necessarily enhance robustness and, in some cases, even worsens it
– an observation consistent with our RL experimental findings.

Our findings in this work clearly identify the pitfalls of both rule-based and model-based verifiers in the context of mathematical reasoning: current rule-based verifiers are not sufficiently accurate even for widely used open-source mathematical datasets with short answers that should be easily verifiable. Pursuing more accurate model-based verifiers is a promising direction to improve RL performance; however, this approach introduces unique vulnerabilities to hacking, which require further investigation in future work.

2 Preliminaries

Recent research demonstrates that reinforcement learning (RL) using verifiable problems, such as mathematical problems with ground-truth answers, can substantially enhance a model’s reasoning abilities [DeepSeek-AI et al., 2025, Team et al., 2025, Seed et al., 2025]. In this study, we follow this RL with verifiable reward (RLVR) training paradigm to examine the strengths and limitations of different verifiers. Below we provide a short introduction to the preliminary context.

RL with Verifiable Reward (RLVR). The goal of RL is to maximize the cumulative rewards the model receives from its environment during training [Sutton et al., 1998]. When training on verifiable problems – such as math or code tasks with definitive answers – the correctness of the model’s output can be automatically evaluated by a verifier. This verifier checks whether the model’s predicted answer matches the known ground-truth answer and assigns a corresponding reward. This paradigm has been widely used to boost the reasoning abilities of LLMs such as in Tulu3 [Lambert et al., 2024], DeepSeek-R1 [DeepSeek-AI et al., 2025], and Kimi-k1.5 [Team et al., 2025].

Rule-based Verifier is a system that relies on a large set of manually written equivalence rules to determine whether a predicted answer matches the ground truth.
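As a concrete illustration (a minimal sketch, not the actual Verl, Qwen, or HuggingFace implementations discussed later), a rule-based verifier boils down to answer normalization followed by string and numeric equivalence checks; anything the rules fail to cover, such as unit conversions, is rejected:

```python
from fractions import Fraction

def normalize(ans: str) -> str:
    """Strip common LaTeX wrappers and whitespace (a tiny subset of real rule sets)."""
    ans = ans.strip().strip("$")
    for tok in (r"\left", r"\right", r"\!", " "):
        ans = ans.replace(tok, "")
    if ans.startswith(r"\boxed{") and ans.endswith("}"):
        ans = ans[len(r"\boxed{"):-1]
    return ans

def to_number(ans: str):
    """Try to read the answer as an exact rational; return None on failure."""
    try:
        if ans.startswith(r"\frac{"):  # e.g. \frac{1}{2}
            num, den = ans[len(r"\frac{"):-1].split("}{")
            return Fraction(int(num), int(den))
        return Fraction(ans)  # accepts "3", "1/2", "0.5"
    except (ValueError, ZeroDivisionError):
        return None

def rule_based_verify(pred: str, gold: str) -> bool:
    pred, gold = normalize(pred), normalize(gold)
    if pred == gold:  # exact string match after normalization
        return True
    p, g = to_number(pred), to_number(gold)
    return p is not None and g is not None and p == g  # numeric equivalence
```

Even this tiny sketch accepts \boxed{\frac{1}{2}} against 0.5, but it rejects “3 hours” against “180 minutes”, exactly the kind of semantic equivalence that motivates model-based verifiers.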
Rule-based verifiers have been dominantly employed in recent efforts to develop mathematical reasoning [DeepSeek-AI et al., 2025, Team et al., 2025, Zeng et al., 2025a, Yu et al., 2025], yet their potential limitations are under-explored. For example, writing comprehensive rule sets is time-consuming and requires domain expertise, and even the most carefully crafted rules often fail to cover edge cases – for instance, mathematically equivalent expressions under certain contexts (e.g., 0.5π vs. 90° in geometry). Moreover, rule-based verifiers struggle to interpret semantic context, such as variations in units (e.g., “3 hours” vs. “180 minutes”). As a result, they may incorrectly reject correct answers that are expressed differently. How accurate are rule-based verifiers in the widely used mathematical reasoning context? How would verification errors affect RL training performance? We investigate these questions next.

3 Are Verifiers Trustworthy? From a Static Evaluation Perspective

In this section, we study verifiers in a static, classification-based evaluation setting, where the verifiers are provided with generated responses and ground-truth answers and asked to judge whether each generated response is correct. We first curate our own evaluation dataset and reveal the limitations of current rule-based verifiers, and then we study model-based verifiers as a potential remedy.

3.1 Evaluation Dataset Construction

We curate our dataset as a static classification task to examine the capabilities of verifiers in classifying the correctness of model responses with respect to a provided ground-truth answer. The
curation process involves three main steps. First, we select and sample from four mathematical RL datasets – Math [Hendrycks et al., 2021], DeepscaleR [Luo et al., 2025a], Open-Reasoner-Zero (ORZ-Math) [Hu et al., 2025], and Skywork-OR1 [He et al., 2025] – with 1,000 queries sampled from each dataset. In the second step, we generate two responses for each of these queries using two types of language models: (1) short-CoT models, specifically Qwen2.5-Math-7B-Instruct [Yang et al., 2024a] and Qwen2.5-32B-Instruct [Yang et al., 2024b], and (2) R1-style long-CoT models, namely DeepSeek-R1-Distill-Qwen-7B and 32B [DeepSeek-AI et al., 2025]. Finally, we employ GPT-4o [Hurst et al., 2024] as an annotator to judge, based on the response and target answer, whether the model’s response aligns with the target answer, using the prompt shown in Figure 4 in Appendix A. The final dataset comprises 2,000 examples per dataset, for a total of 8,000 examples.

We emphasize that the datasets we selected already represent a relatively easy setting for verification – these datasets contain only short answers, and most were specifically curated to be easily verifiable by rules in order to facilitate RL. Consequently, more realistic scenarios are likely to present greater challenges than those reflected in our empirical results next.

Figure 2: Recall rates of various rule-based verifiers across multiple datasets (Math/DeepscaleR/ORZ-Math/Skywork: VERL 0.92/0.86/0.89/0.78, avg. 0.86; Qwen 0.95/0.94/0.94/0.86, avg. 0.92; HF 0.95/0.94/0.95/0.88, avg. 0.93), evaluated on a subset sampled from Deepseek-R1-Distill-Qwen-32B. “VERL”, “Qwen”, and “HF” refer to the Verl Math Verifier, Qwen-Math Verifier, and Hugging Face Math Verifier, respectively.

Figure 3: Recall rate of the Huggingface Math Verifier, evaluated on data sampled from various models across different RL training datasets (average recall: Qwen2.5-Math-7B-Instruct 0.94, Qwen2.5-32B-Instruct 0.95, DS-Distill-Qwen-7B 0.92, DS-Distill-Qwen-32B 0.93). “DS” stands for Deepseek, while “Skywork” refers to the Skywork-OR1 dataset.

Justification of GPT-4o annotation. As we utilize GPT-4o to obtain ground-truth annotations for scalable testing, we conduct a human evaluation here to justify GPT-4o as the annotator. Concretely, we sample 50 examples from each dataset, totaling 200 examples. Two human annotators then participate: each is provided with the model’s response and the target answer and asked to judge whether the response is correct. We assess the consistency between the human annotations and GPT-4o’s annotations and aggregate the results by averaging. The consistency between GPT-4o and the human annotators is high, with a Cohen’s Kappa of 0.933 and an F1 score of 0.983, which demonstrates that GPT-4o’s judgments are reasonably accurate.

3.2 Rule-based Verifiers: Precision at the Cost of Recall

Setup. We adopt three popular rule-based verifier implementations: (1) Verl Math Verifier,2 (2) Qwen-Math Verifier,3 and (3) HuggingFace Math Verifier,4 which have been widely utilized in previous studies [Zeng et al., 2025b,a, He et al., 2025, Yu et al., 2025].

High Precision at the Cost of Recall. To evaluate the performance of these verifiers, we test them on a subset of data sampled from Deepseek-R1-Distill-Qwen-32B, a state-of-the-art open-source model known for its exceptional mathematical reasoning abilities. As shown in Table
5 in Appendix B, all three verifiers exhibit near-perfect precision, consistently achieving over 99% precision. This means that if an answer passes the rules, it is almost certainly correct, because rule-based verifiers rely on deterministic programming-language logic and computation. Notably, the HuggingFace Math Verifier and Qwen-Math Verifier show very similar performance. However, the rigid structure of these rule-based systems presents challenges when dealing with more complex or edge-case queries, leading to a recall rate as low as 0.78 on some datasets, such as Skywork-OR1, as shown in Figure 2. This indicates that some correct responses are misjudged as incorrect; we illustrate several cases in Figure 5.

Challenges in Verifying Advanced Models. Figure 3 presents the recall rate of the HuggingFace Math Verifier across various sampling models and datasets. A key observation is that as the capabilities of the models increase, providing accurate supervision becomes more challenging for rule-based verifiers. For example, the recall rate for the long-CoT models, such as DeepSeek-R1-Distill-Qwen-7B and 32B, averages around 0.92, which is much lower than for other, weaker models. This is because some complex queries, which only advanced models can solve, are misjudged by the rule-based verifier. This inability of rule-based verifiers underlines the difficulty of verifying highly capable models. The trend is particularly concerning, given that the community is advancing increasingly powerful reasoning models, which in turn require stronger verifiers.

2 https://github.com/volcengine/verl/blob/main/verl/utils/reward_score/math.py
3 https://github.com/QwenLM/Qwen2.5-Math/blob/main/evaluation/evaluate.py
4 https://github.com/huggingface/Math-Verify

Table 1: The performance of various model-based verifiers across different datasets, reported as Precision/Recall.
To assess model-based verifiers within a hybrid verifier framework, we evaluate a subset sampled from DeepSeek-R1-Distill-Qwen-32B and exclude examples already verified as correct by the HuggingFace Math Verifier. As a result, the HuggingFace verifier shows zero metrics here, since it classifies all remaining examples as incorrect. “DS” denotes DeepSeek, and for Qwen series models, the “instruct” suffix is omitted for clarity.

Verifier | Math | DeepscaleR | ORZ-Math | Skywork-OR1 | Avg.
Random | 0.24/0.53 | 0.07/0.30 | 0.18/0.50 | 0.18/0.45 | 0.17/0.44
Huggingface Verifier | 0/0 | 0/0 | 0/0 | 0/0 | 0/0
General LLM as Judge
Qwen2.5-1.5B | 0.80/0.47 | 0.58/0.51 | 0.71/0.74 | 0.57/0.45 | 0.66/0.54
Qwen2.5-Math-1.5B | 0.77/0.52 | 0.64/0.49 | 0.71/0.68 | 0.57/0.46 | 0.67/0.54
DS-R1-Distill-Qwen-1.5B | 0.76/0.51 | 0.70/0.50 | 0.75/0.61 | 0.52/0.33 | 0.68/0.49
Qwen2.5-7B | 0.92/0.43 | 0.85/0.59 | 0.92/0.68 | 0.64/0.34 | 0.84/0.51
Qwen2.5-Math-7B | 0.89/0.51 | 0.76/0.53 | 0.90/0.74 | 0.66/0.41 | 0.80/0.55
DS-R1-Distill-Qwen-7B | 0.86/0.53 | 0.72/0.60 | 0.83/0.77 | 0.74/0.44 | 0.79/0.59
Trained Verifier
R1-Distill-Verifier-1.5B | 0.80/0.61 | 0.69/0.58 | 0.78/0.75 | 0.66/0.53 | 0.73/0.62
xVerify-0.5B-I | 0.85/0.66 | 0.76/0.58 | 0.82/0.81 | 0.73/0.44 | 0.79/0.62
xVerify-3B-Ia | 0.94/0.92 | 0.84/0.65 | 0.92/0.86 | 0.91/0.71 | 0.90/0.78
general-verifier | 0.94/0.93 | 0.90/0.80 | 0.89/0.89 | 0.86/0.84 | 0.90/0.86

Diverse and Difficult Data Poses Significant Challenges to Rule-Based Verifiers. As shown in Figure 2, more challenging datasets tend to have lower recall rates. For example, the Math dataset, known for its simplicity and well-established structure, results in a relatively high recall rate, while newer and more difficult datasets, such as Skywork-OR1, experience significantly lower recall rates. These findings underscore a critical limitation: as datasets become more varied and more challenging, the reliability of rule-based verifiers as supervision tools for scalable reinforcement learning diminishes.
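A generative LLM-as-judge verifier of the kind evaluated above can be reduced to three steps: build a judgment prompt, sample the model, and parse a final verdict out of a possibly long reasoning trace. The sketch below is a simplified, hypothetical stand-in (the prompts actually used appear in the appendix); `llm` is any text-generation callable.

```python
def build_judge_prompt(question: str, gold: str, response: str) -> str:
    """Hypothetical judge prompt; not the paper's actual prompt."""
    return (
        "Judge whether the response's final answer matches the ground-truth answer.\n"
        f"Question: {question}\nGround truth: {gold}\nResponse: {response}\n"
        'Reply with "Final Judgment: correct" or "Final Judgment: incorrect".'
    )

def parse_judgment(generation: str) -> bool:
    """Take the judge's last stated verdict so chain-of-thought text is ignored."""
    verdict = generation.lower().rsplit("final judgment:", 1)[-1]
    return "incorrect" not in verdict and "correct" in verdict

def model_based_verify(llm, question: str, gold: str, response: str) -> bool:
    """`llm` maps a prompt string to generated text."""
    return parse_judgment(llm(build_judge_prompt(question, gold, response)))
```

Parsing only the text after the final verdict marker is what makes the same harness work for both short-CoT and R1-style long-CoT judges.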
3.3 Model-based Verifiers: Toward Greater Flexibility

To mitigate the limitations of rule-based verifiers, we next investigate model-based verifiers as a potential alternative. Model-based verifiers seek to leverage
the core capabilities of LLMs, including their advanced reasoning skills, to produce more accurate judgments. They are, in principle, better equipped to evaluate answers presented in diverse formats. Model-based verifiers are explored in several concurrent works [Su et al., 2025, Ma et al., 2025, Seed et al., 2025] without deep discussion or ablation of their strengths and limitations. As such, the potential benefits of model-based verifiers over rule-based ones in the context of RL remain unclear. Additionally, model-based verifiers, especially generative ones, introduce significant computational overhead, which can be critical for online RL training. In this section, we first explore model-based verifiers in static evaluation, and in §4 we discuss their effect in RL training.

Setup. We evaluate two categories of general LLMs as verifiers: (1) short-CoT models, namely Qwen2.5-Instruct (1.5B and 7B) [Yang et al., 2024b] and Qwen2.5-Math-Instruct (1.5B and 7B) [Yang et al., 2024a]; and (2) R1-style long-CoT models, namely DeepSeek-R1-Distill-Qwen (1.5B and 7B) [DeepSeek-AI et al., 2025]. We will also discuss model-based verifiers specifically trained for verification tasks in §5. Note that we focus on models with up to 7B parameters, as larger models are neither practical nor efficient for scaling RL training. All of these models are generative, typically producing reasoning traces along with the final judgment.

Since rule-based verifiers achieve nearly perfect precision but tend to produce false negatives, we focus here exclusively on the examples that rule-based verifiers classify as incorrect. This approach better distinguishes different model-based verifiers. It also aligns with the design of our hybrid verification system in the RL experiments, where rule-based verifiers are applied first and model-based verifiers are used only for cases deemed incorrect. We provide further details in §4.1.
Specifically, for the evaluation dataset, we use the subset sampled from DeepSeek-R1-Distill-Qwen-32B, excluding examples that have already been classified as correct by the HuggingFace Math Verifier. For additional details about the evaluation procedure, please refer to Appendix C.

Performance. As shown in Table 1, the long-CoT language models demonstrate strong potential as verifiers, even without task-specific fine-tuning. For instance, DeepSeek-R1-Distill-Qwen-7B achieves an average precision of 0.79 and a recall rate of 0.59, contributing to an overall improvement in the verifier system’s recall. The test cases in this subset are often non-trivial – as illustrated in Figure 7 – with answers requiring complex transformations and calculations to establish equivalence. Such scenarios would be costly and complex to handle with manually crafted rules. However, the model-based verifier, aided by the CoT process, successfully handles these complex cases. Moreover, larger model sizes contribute to better performance, as their enhanced mathematical capabilities allow them to tackle more sophisticated problems. We discuss models specifically trained for verification tasks in §5.

4 The Effect of Verifiers on RL Training

In §3, we showed that model-based verifiers achieve strong performance across datasets and substantially improve recall on the verification task. Building on this, we adopt model-based verifiers in RL training and compare their impact with rule-based verifiers. Specifically, we propose a hybrid verifier
that integrates the strengths of both approaches. We first evaluate its performance in static settings, then analyze its improvements over rule-based verifiers in RL training, as well as its training efficiency compared to fully model-based verifiers.

4.1 The Hybrid Verifier

Design. In the hybrid design, the rule-based verifier first classifies responses, and the model-based verifier provides supplementary judgment only when the rule-based verifier flags a response as incorrect. This design leverages the strengths of both methods: maintaining high precision through the rule-based verifier while improving recall with the model-based verifier.

Static Evaluation. In Table 6 in Appendix D, we present the static evaluation results of the rule-based verifiers, the model-based verifiers, and the hybrid verifiers. The hybrid verifier improves recall by approximately 3 percentage points on average over the rule-based verifier, while consistently maintaining precision above 98%. Model-based verifiers alone may exhibit lower recall than the hybrid approach, as smaller models can overthink some straightforward cases. However, integrating the rule-based verifier mitigates this issue, and the hybrid system achieves superior performance in both precision and recall. Furthermore, by filtering out straightforward cases that the rule-based verifier can confidently resolve, the hybrid design substantially reduces the computational load on the model-based verifier. We discuss this further in §4.3.

4.2 Experimental Setup

For all experiments, we follow the approach of Deepseek-R1 [DeepSeek-AI et al., 2025], using GRPO [Shao et al., 2024] as the training algorithm and adhering to the zero RL training recipe – starting training directly from the base model. Our policy model is Qwen2.5-7B Base [Yang et al., 2024b].
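The hybrid design of §4.1 can be summarized in a few lines (a schematic sketch, with the two component verifiers passed in as plain callables rather than the actual rule-based and model-based systems):

```python
def hybrid_verify(rule_verify, model_verify, question, gold, response):
    """Rule-based verdicts are near-perfectly precise, so a rule-based 'correct'
    is accepted immediately; the slower model-based verifier is consulted only
    for responses the rules reject, recovering their false negatives."""
    if rule_verify(response, gold):
        return True
    return model_verify(question, gold, response)
```

Because the rule-based verifier confidently resolves most responses, only a small fraction of examples ever reach the model-based verifier, which is what keeps the verification overhead acceptable during training (§4.3).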
We conduct RL training using the DeepscaleR dataset and construct a hybrid verifier by combining the Huggingface Math Verifier with DeepSeek-R1-Distill-Qwen-1.5B, the best-performing 1.5B model on DeepscaleR, as shown in Table 1. Further training details are provided in Appendix E.

Benchmarks. We build our evaluation script based on Yang et al. [2024a], which utilizes the rule-based verifier. We evaluate performance on standard mathematical reasoning benchmarks, including GSM8K [Cobbe et al., 2021], MATH 500 [Hendrycks et al., 2021], OlympiadBench [He et al., 2024], and Minerva Math [Lewkowycz et al., 2022], as well as on competition-level benchmarks including AIME 2024 and AMC 2023. Following the approach in [Zeng et al., 2025a, Yu et al., 2025], we evaluate AIME 2024 by averaging the results of 32 random samplings (Avg@32) to ensure stable evaluation. For additional evaluation details, please refer to Appendix E.

Table 2: Detailed performance of models across multiple benchmarks. The best result from each run is reported. Blue lines indicate models trained with a hybrid verifier without evidence of reward hacking, while pink lines indicate runs where reward hacking is detected. “HF” represents the HuggingFace Math Verifier. Training and evaluation curves for these models are presented in Figure 1 and Figure 8.

Model | GSM8K | MATH 500 | Minerva Math | Olympiad Bench | AIME24 (Avg@32) | AMC23 | Avg.
Qwen2.5-7B-SimpleRL-Zoo | 91.7 | 78.2 | 38.6 | 40.4 | 15.6 | 62.5 | 54.5
Qwen2.5-7B | 88.2 | 64.6 | 25.7 | 30.1 | 0.3 | 30.0 | 39.8
  + DeepscaleR & HF verifier | 92.8 | 80.0 | 37.5 | 42.2 | 15.3 | 62.5 | 55.1
  + DS-R1-Distill-Qwen-1.5B verifier | 93.3 | 82.4 | 41.2 | 42.5 | 20.4 | 70.0 | 58.3
  + R1-Distill-Verifier-1.5B verifier | 93.0 | 79.8 | 40.4 | 40.1 | 17.8 | 77.5 | 58.1
  + general-verifier | 92.5 | 82.0 | 43.0 | 40.9 | 18.4 | 70.0 | 57.8

Table 3: Detailed performance of models across multiple benchmarks with GPT-4o as the verifier. This table evaluates the correctness of the models’ responses using GPT-4o as the verifier, reporting the best result from each run. Blue lines indicate models trained with a hybrid verifier without evidence of reward hacking, while pink lines indicate runs where reward hacking is detected. “HF” represents the HuggingFace Math Verifier.

Model | GSM8K | MATH 500 | Minerva Math | Olympiad Bench | AIME24 (Avg@32) | AMC23 | Avg.
Qwen2.5-7B | 88.5 | 65.8 | 39.0 | 31.7 | 0.3 | 30.0 | 42.6
  + DeepscaleR & HF verifier | 93.1 | 80.8 | 53.3 | 45.6 | 15.3 | 62.5 | 58.4
  + DS-R1-Distill-Qwen-1.5B verifier | 93.7 | 82.8 | 50.0 | 45.6 | 20.4 | 70.0 | 60.4
  + R1-Distill-Verifier-1.5B verifier | 93.3 | 80.6 | 47.1 | 43.1 | 17.9 | 77.5 | 59.9
  + general-verifier | 92.7 | 82.8 | 52.2 | 44.1 | 18.4 | 70.0 | 60.0

4.3 Results

Hybrid Verifier Improves Accuracy and Data Efficiency. As shown in Figure 1, the inclusion of the hybrid verifier significantly improves evaluation accuracy, with a peak of 58.35, more than 3 points higher than that of using the rule-based verifier only. Moreover, the hybrid verifier boosts dataset utilization by reducing the proportion of answers that cannot be correctly parsed. For instance, as shown in Table 2, the performance of the rule-based verifier is nearly identical to that of our baseline, SimpleRL-Zoo [Zeng et al., 2025a], which uses training data that is 10 times smaller and less challenging. However, after integrating the model-based verifier, there is a substantial improvement in performance.

Training Efficiency. Compared to a rule-based verifier, the hybrid verifier incorporates a generative language model into the verification process, increasing verification time during training.
However, the HybridEngine’s design (detailed in Appendix E) allows for a substantial increase in GPU utilization, keeping the overhead at an acceptable level. Moreover, the design of our hybrid verifier allows the model-based verifier’s workload to be further reduced.

Evaluation Results using GPT-4o. Since we have identified the issue of inaccurate judgments by rule-based verifiers, relying on them at inference time – as is common in most previous works – may disadvantage RL training when model-based verifiers are used. Therefore, we also report evaluation results using GPT-4o as the verifier. As shown in Table 3, most results are similar to those obtained using the rule-based verifier (Table 2), and the conclusions drawn from the latter remain valid. However, a notable exception occurs with the Minerva Math benchmark, where a significant performance gap of 13 points is observed on the Qwen2.5-7B base model when GPT-4o is used as the evaluation verifier (Table 2 vs. Table 3). This suggests that the limitations of rule-based verifiers extend even to well-established benchmarks, highlighting the need for more sophisticated verifiers in certain cases to ensure accurate evaluation.

5 When Good Verifiers Go Bad: Reward Hacking in RL Training

In §4.3, we show that using a general-purpose, off-the-shelf LLM in a hybrid verifier notably enhances RL training performance. To further improve verifier effectiveness, we fine-tune these LLMs to increase their recall on the static verification task. We then
integrate the fine-tuned models into the hybrid verifier and evaluate their impact on RL training.

5.1 Classification-RL Performance Mismatch

Trained Verifier. We incorporate dedicated open-source verifiers explicitly fine-tuned for verification tasks, including: (1) xVerify 0.5B and 3B [Chen et al., 2025], fine-tuned on approximately 190K examples from multiple benchmarks; (2) general-verifier 1.5B [Ma et al., 2025], trained on a wide range of disciplines, including mathematics; and (3) R1-Distill-Verifier-1.5B, a custom verifier we develop through rejection fine-tuning [Yuan et al., 2023], as detailed in Appendix G. The objective of this training is to reduce overthinking and encourage the model to generate more concise and focused outputs. It is worth noting that xVerify is a discriminative verifier, meaning it directly outputs a final judgment without intermediate reasoning. In contrast, the other verifiers are generative, producing a chain-of-thought reasoning process before arriving at their final decision. For all trained verifiers, we apply an improved prompting strategy that includes the original question to provide additional context for verification. The static evaluation results for these verifiers are summarized in Table 1.

Static evaluation does not reflect long-term RL training. As shown in Table 1, off-the-shelf verifiers trained on large-scale datasets significantly outperform general-purpose models. Among them, general-verifier achieves the highest performance, with a precision of 0.90 and a recall of 0.86. However, Table 2 reveals that the performance gap between the “general-verifier” model and the “DS-R1-Distill-Qwen-1.5B” model is relatively small. This suggests that strong static evaluation metrics do not necessarily translate into superior performance during long-term RL training.
Furthermore, our trained verifier, R1-Distill-Verifier-1.5B, also shows substantial gains over its base model, improving average recall from 0.49 to 0.62 and precision from 0.68 to 0.73 in static evaluation. Intuitively, we expect these improvements to translate into superior performance during dynamic RL training. However, we observe a counterintuitive phenomenon: as shown in the bottom right of Figure 1, after long-term RL training, the training reward experiences a significant surge at around 450 iterations. This anomaly leads us to investigate the occurrence of reward hacking, where the model optimizes for the reward signal in unintended ways that do not align with genuine performance improvements.

5.2 Verifier Under Siege: Reward Hacking in RL Training

Oracle Reward Annotation. To assess whether the rule-based or hybrid verifier provides an accurate reward signal and to detect potential reward hacking, we employ GPT-4o [Hurst et al., 2024] as an oracle during RL training. At each saved checkpoint, we randomly sample 1,000 queries from the training data, generate responses, and evaluate their correctness using GPT-4o. We then calculate the corresponding oracle reward. By analyzing the deviation between the training reward and the oracle reward, we gain valuable insights into both the effectiveness of the verifiers and the occurrence of reward hacking.

Reward Hacking in Dynamic Training. Figure 1 (Right) plots the training reward against the oracle reward for different verifiers during RL training on DeepscaleR. Notably, after approximately 450 training iterations, the training reward using R1-Distill-Verifier-1.5B diverges significantly from the oracle reward provided by GPT-4o, while other methods maintain close alignment. This indicates that despite its strong static
performance, R1-Distill-Verifier-1.5B becomes compromised during dynamic RL training, leading to a drop in evaluation accuracy and eventual training collapse, as shown in Figure 1 (Left). In contrast, the untrained verifier, DS-R1-Distill-Qwen-1.5B, and the rule-based verifier do not exhibit such instability. These findings motivate our further investigation into verifier robustness in §6.

Hacking Pattern Analysis. Through analysis of the hacking phenomena, we observe that most of the attacks targeting R1-Distill-Verifier-1.5B fall into two patterns: Single Symbol and Gibberish. As shown in Figure 9 and Figure 10 in Appendix H, the policy model exploits vulnerabilities in the verifier during training by outputting either a single simple character (such as “{”) or long sequences of meaningless text to bypass the verifier. Our observations are consistent with those of Baker et al. [2025], indicating that although introducing a model-based verifier effectively increases the verifier’s flexibility, it implicitly raises the complexity of the environment and consequently reduces its robustness. Therefore, studying and improving the robustness of verifiers is of critical importance.

Table 4: Success rates (%) of representative hacking patterns against verifiers. A lower success rate indicates that the verifier is less susceptible to hacking pattern attacks (i.e., lower is better). This table presents the success rates of selected representative hacking patterns, along with the overall average success rate. “DS” denotes DeepSeek, and for Qwen series models, the “instruct” suffix is omitted for clarity. Full results for all patterns are provided in Table 8 and Table 9.
Verifier | Adversarial Prefixes | Answer Explanation | Empty Symbols | Gibberish | Html Markdown | Prompt Injection
General LLM as Judge
Qwen2.5-1.5B | 7.4 | 12.5 | 3.4 | 0.4 | 5.9 | 11.5
Qwen2.5-Math-1.5B | 20.8 | 77.9 | 44.4 | 5.5 | 26.3 | 22.7
DS-R1-Distill-Qwen-1.5B | 21.7 | 25.5 | 23.6 | 20.8 | 13.6 | 5.3
Qwen2.5-7B | 1.9 | 7.6 | 8.3 | 0.0 | 11.5 | 0.2
Qwen2.5-Math-7B | 30.2 | 61.6 | 29.7 | 9.8 | 18.7 | 35.2
DS-R1-Distill-Qwen-7B | 1.5 | 42.9 | 22.7 | 1.1 | 14.9 | 6.4
Trained Verifier
R1-Distill-Verifier-1.5B | 35.0 | 27.6 | 29.5 | 10.6 | 15.5 | 16.1
xVerify-0.5B-I | 0.0 | 0.4 | 0.2 | 0.2 | 0.0 | 0.0
xVerify-3B-Ia | 0.2 | 1.1 | 0.2 | 0.0 | 0.6 | 0.4
General-Verifier | 22.1 | 28.5 | 5.9 | 18.1 | 7.2 | 3.6

6 Probing Verifier Robustness with Hacking Patterns

In §5, we find that static evaluation on the verification task fails to capture the verifier’s influence on long-term RL. Moreover, the trained verifier becomes increasingly vulnerable to hacking patterns over time. To better understand this vulnerability, we now perform a detailed robustness analysis of model-based verifiers. Building on the hacking patterns identified in §5.2, we design a broader set of attack patterns – ranging from simple gibberish inputs to more sophisticated adversarial prefixes. We then evaluate the effectiveness of these attacks across multiple model-based verifiers.

6.1 Experimental Setup

To systematically probe the vulnerabilities of verifiers, we construct a new adversarial dataset based on approximately 471 samples from the DeepScaleR dataset. Inspired by the case study in §5, we design 13 distinct hacking pattern types, such as empty symbols, gibberish text, and adversarial prefixes, each paired with corresponding adversarial answers (see Table 7 for details). For every original sample, we randomly select one adversarial answer per pattern type to simulate potential model predictions. Each of these adversarial answers is then paired with the original problem
and ground-truth answer, resulting in a comprehensive set of “hacking data”. We then evaluate the attack success rates – i.e., how often a hacking pattern successfully causes the verifier to misjudge an incorrect answer as correct – for different types of hacking patterns against a range of model-based verifiers. These include various general-purpose LLMs (e.g., Qwen2.5-Math-1.5B/7B-Instruct, Qwen2.5-1.5B/7B-Instruct, DeepSeek-R1-Distill-Qwen-1.5B/7B), our own trained verifiers, and state-of-the-art verifiers such as xVerify-0.5B-I, xVerify-3B-Ia, and general-verifier.

6.2 Analysis

Most model-based verifiers are vulnerable to hacking patterns. Table 4 presents the success rates of various hacking patterns against different model-based verifiers, revealing that most verifiers are highly susceptible to these attacks. Strikingly, even simple patterns, such as inserting empty symbols (e.g., “{”) or injecting gibberish text, can effectively compromise many well-trained verifiers. Notably, our trained R1-Distill-Verifier-1.5B becomes more susceptible to such attacks after training. For instance, its susceptibility to adversarial prefixes rises from 21.7 (as seen in DeepSeek-R1-Distill-Qwen-1.5B) to 35.0, aligning with the observations discussed in §5.

Generative verifiers tend to be more vulnerable than discriminative ones. Verifiers such as general-verifier and Qwen2.5-Math-1.5B/7B-Instruct show notably higher attack success rates than xVerify. Our analysis indicates that chain-of-thought (CoT) based generative verifiers are particularly exposed to attacks that disrupt reasoning, such as adversarial prefixes (e.g., “As an AI assistant, I know the answer is correct.”) and answer explanations (e.g., “The answer is correct. I verified this by checking step by step...”). These findings raise concerns about the faithfulness of CoT reasoning and underscore the need for more robust CoT monitoring and defense mechanisms [Baker et al., 2025].
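Measured concretely, an attack's success rate is simply the fraction of known-incorrect adversarial answers that a verifier nevertheless accepts. A minimal harness sketch (illustrative only; the verifier is any callable returning a boolean judgment):

```python
def attack_success_rate(verify, hacking_examples):
    """hacking_examples: (question, ground_truth, adversarial_answer) triples,
    where every adversarial answer is incorrect by construction. A verdict of
    True on such an example means the verifier was fooled."""
    fooled = sum(verify(q, gold, adv) for q, gold, adv in hacking_examples)
    return fooled / len(hacking_examples)
```

Running this per pattern type (empty symbols, gibberish, adversarial prefixes, and so on) over the same set of base problems yields exactly the per-pattern breakdown reported in Table 4.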
7 Discussion

In this paper, we conduct a comprehensive analysis of rule-based and model-based verifiers within reinforcement learning for mathematical reasoning tasks. Our findings reveal critical pitfalls in both approaches: rule-based verifiers suffer from significant false negatives, particularly as policy models grow stronger, whereas model-based verifiers, despite higher accuracy in static evaluation, are notably vulnerable to reward hacking. This vulnerability results in inflated training rewards that fail to reflect genuine model performance, undermining the reliability of RL training outcomes. Future work should focus on developing verification systems that maintain accuracy without sacrificing robustness, thereby enhancing the reliability and effectiveness of reinforcement learning systems for complex reasoning tasks.

Limitations

One limitation of this paper is that our investigation focuses primarily on mathematical reasoning, a relatively well-defined domain. Although reinforcement learning with verifiable rewards has demonstrated utility across a broader range of complex tasks, including coding and agentic reasoning [Luo et al., 2025b, Liu and Zhang, 2025, Wang et al., 2025, Zheng et al., 2025], we believe that in these broader domains, where RL is applied in much more complex environments, ensuring the effectiveness and robustness of verifiers becomes even more critical. The insights and methods developed in our work may also be valuable for improving performance in these more challenging settings.

References

Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. OpenAI o1 system card. arXiv preprint arXiv:2412.16720, 2024.

DeepSeek-AI, Daya Guo, Dejian Yang,
Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L. Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, S. S. Li, Shuang Zhou, Shaoqing Wu, Shengfeng Ye, Tao Yun, Tian Pei, Tianyu Sun, T. Wang, Wangding Zeng, Wanjia Zhao, Wen Liu, Wenfeng Liang, Wenjun Gao, Wenqin Yu, Wentao Zhang, W. L. Xiao, Wei An, Xiaodong Liu, Xiaohan Wang, Xiaokang Chen, Xiaotao Nie, Xin Cheng, Xin Liu, Xin Xie, Xingchao Liu, Xinyu Yang, Xinyuan Li, Xuecheng Su, Xuheng Lin, X. Q. Li, Xiangyue Jin, Xiaojin Shen, Xiaosha Chen, Xiaowen Sun, Xiaoxiang Wang, Xinnan Song, Xinyi Zhou, Xianzu Wang, Xinxia Shan, Y. K. Li, Y. Q. Wang, Y. X.
Wei, Yang Zhang, Yanhong Xu, Yao Li, Yao Zhao, Yaofeng Sun, Yaohui Wang, Yi Yu, Yichao Zhang, Yifan Shi, Yiliang Xiong, Ying He, Yishi Piao, Yisong Wang, Yixuan Tan, Yiyang Ma, Yiyuan Liu, Yongqiang Guo, Yuan Ou, Yuduan Wang, Yue Gong, Yuheng Zou, Yujia He, Yunfan Xiong, Yuxiang Luo, Yuxiang You, Yuxuan Liu, Yuyang Zhou, Y. X. Zhu, Yanhong Xu, Yanping Huang, Yaohui Li, Yi Zheng, Yuchen Zhu, Yunxian Ma, Ying Tang, Yukun Zha, Yuting Yan, Z. Z. Ren, Zehui Ren, Zhangli Sha, Zhe Fu, Zhean Xu, Zhenda Xie, Zhengyan Zhang, Zhewen Hao, Zhicheng Ma, Zhigang Yan, Zhiyu Wu, Zihui Gu, Zijia Zhu, Zijun Liu, Zilin Li, Ziwei Xie, Ziyang Song, Zizheng Pan, Zhen Huang, Zhipeng Xu, Zhongyu Zhang, and Zhen Zhang. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.

Weihao Zeng, Yuzhen Huang, Qian Liu, Wei Liu, Keqing He, Zejun Ma, and Junxian He. Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild. arXiv preprint arXiv:2503.18892, 2025a.

Qiying Yu, Zheng Zhang, Ruofei Zhu, Yufeng Yuan, Xiaochen Zuo, Yu Yue, Tiantian Fan, Gaohong Liu, Lingjun Liu, Xin Liu, et al. Dapo: An open-source llm reinforcement learning system at scale. arXiv preprint arXiv:2503.14476, 2025.

Jingcheng Hu, Yinmin Zhang, Qi Han, Daxin Jiang, Xiangyu Zhang, and Heung-Yeung Shum. Open-reasoner-zero: An open
source approach to scaling up reinforcement learning on the base model. arXiv preprint arXiv:2503.24290, 2025.

Jujie He, Jiacai Liu, Chris Yuhao Liu, Rui Yan, Chaojie Wang, Peng Cheng, Xiaoyu Zhang, Fuxiang Zhang, Jiacheng Xu, Wei Shen, Siyuan Li, Liang Zeng, Tianwen Wei, Cheng Cheng, Bo An, Yang Liu, and Yahui Zhou. Skywork open reasoner series. https://capricious-hydrogen-41c.notion.site/Skywork-Open-Reaonser-Series-1d0bc9ae823a80459b46c149e4f51680, 2025. Notion Blog.

Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1.5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599, 2025.

ByteDance Seed, Yufeng Yuan, Yu Yue, Mingxuan Wang, Xiaochen Zuo, Jiaze Chen, Lin Yan, Wenyuan Xu, Chi Zhang, Xin Liu, et al. Seed-thinking-v1.5: Advancing superb reasoning models with reinforcement learning. arXiv preprint arXiv:2504.13914, 2025.

Richard S Sutton, Andrew G Barto, et al. Reinforcement learning: An introduction, volume 1. MIT Press, Cambridge, 1998.

Nathan Lambert, Jacob Morrison, Valentina Pyatkin, Shengyi Huang, Hamish Ivison, Faeze Brahman, Lester James V Miranda, Alisa Liu, Nouha Dziri, Shane Lyu, et al. Tülu 3: Pushing frontiers in open language model post-training. arXiv preprint arXiv:2411.15124, 2024.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv preprint arXiv:2103.03874, 2021.

Michael Luo, Sijun Tan, Justin Wong, Xiaoxiang Shi, William Y. Tang, Manan Roongta, Colin Cai, Jeffrey Luo, Li Erran Li, Raluca Ada Popa, and Ion Stoica. Deepscaler: Surpassing o1-preview with a 1.5b model by scaling rl. https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2, 2025a. Notion Blog.
An Yang, Beichen Zhang, Binyuan Hui, Bofei Gao, Bowen Yu, Chengpeng Li, Dayiheng Liu, Jianhong Tu, Jingren Zhou, Junyang Lin, et al. Qwen2.5-math technical report: Toward mathematical expert model via self-improvement. arXiv preprint arXiv:2409.12122, 2024a.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, et al. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115, 2024b.

Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, et al. Gpt-4o system card. arXiv preprint arXiv:2410.21276, 2024.

Weihao Zeng, Yuzhen Huang, Wei Liu, Keqing He, Qian Liu, Zejun Ma, and Junxian He. 7b model and 8k examples: Emerging reasoning with reinforcement learning is both effective and efficient. https://hkust-nlp.notion.site/simplerl-reason, 2025b. Notion Blog.

Yi Su, Dian Yu, Linfeng Song, Juntao Li, Haitao Mi, Zhaopeng Tu, Min Zhang, and Dong Yu. Expanding rl with verifiable rewards across diverse domains. arXiv preprint arXiv:2503.23829, 2025.

Xueguang Ma, Qian Liu, Dongfu Jiang, Zejun Ma, and Wenhu Chen. General-reasoner: Advancing llm reasoning across all domains. https://github.com/TIGER-AI-Lab/General-Reasoner, 2025.

Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Xiao Bi, Haowei Zhang, Mingchuan Zhang, YK Li, Y Wu, et al. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. arXiv preprint arXiv:2402.03300, 2024.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al.
Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Chaoqun He, Renjie Luo, Yuzhuo Bai, Shengding Hu, Zhen Leng Thai, Junhao Shen, Jinyi Hu, Xu Han, Yujie Huang, Yuxiang Zhang, et al. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems. arXiv preprint arXiv:2402.14008, 2024.

Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative reasoning problems with language models. Advances in Neural Information Processing Systems, 35:3843–3857, 2022.

Ding Chen, Qingchen Yu, Pengyuan Wang, Wentao Zhang, Bo Tang, Feiyu Xiong, Xinchi Li, Minchuan Yang, and Zhiyu Li. xverify: Efficient answer verifier for reasoning model evaluations. arXiv preprint arXiv:2504.10481, 2025.

Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with large language models. arXiv preprint arXiv:2308.01825, 2023.

Bowen Baker, Joost Huizinga, Leo Gao, Zehao Dou, Melody Y Guan, Aleksander Madry, Wojciech Zaremba, Jakub Pachocki, and David Farhi. Monitoring reasoning models for misbehavior and the risks of promoting obfuscation. arXiv preprint arXiv:2503.11926, 2025.

Michael Luo, Sijun Tan, Roy Huang, Xiaoxiang Shi, Rachel Xin, Colin Cai, Ameen Patel, Alpay Ariyak, Qingyang Wu, Ce Zhang, Li Erran Li, Raluca Ada Popa, Ion Stoica, and Tianjun Zhang. Deepcoder: A fully open-source 14b coder at o3-mini level. https://pretty-radio-b75.notion.site/DeepCoder-A-Fully-Open-Source-14B-Coder-at-O3-mini-Level-1cf81902c14680b3bee5eb349a512a51, 2025b. Notion Blog.

Jiawei Liu and Lingming Zhang. Code-r1: Reproducing r1 for code with reliable rewards. 2025.
Zihan Wang, Kangrui Wang, Qineng Wang, Pingyue Zhang, Linjie Li, Zhengyuan Yang, Kefan Yu, Minh Nhat Nguyen, Licheng Liu, Eli Gottlieb, et al. Ragen: Understanding self-evolution in llm agents via multi-turn reinforcement learning. arXiv preprint arXiv:2504.20073, 2025.

Yuxiang Zheng, Dayuan Fu, Xiangkun Hu, Xiaojie Cai, Lyumanshan Ye, Pengrui Lu, and Pengfei Liu. Deepresearcher: Scaling deep research via reinforcement learning in real-world environments. arXiv preprint arXiv:2504.03160, 2025.

Guangming Sheng, Chi Zhang, Zilingfeng Ye, Xibin Wu, Wang Zhang, Ru Zhang, Yanghua Peng, Haibin Lin, and Chuan Wu. Hybridflow: A flexible and efficient rlhf framework. arXiv preprint arXiv:2409.19256, 2024.

A Details of Verifier Evaluation Dataset Construction

In §3.1, we frame our dataset as a static classification task to assess the ability of verifiers to determine whether model responses align with a provided ground-truth answer. We use GPT-4o [Hurst et al., 2024] as an annotator to generate ground-truth labels, evaluating each response against the target answer according to the prompt shown in Figure 4.

GPT-4o Prompt:
Your task is to evaluate whether the Extracted Answer is equivalent to the Ground Truth Answer, given the original question and the Ground Truth Answer provided. You do not need to answer the question itself. Please follow these steps clearly:
1. Review the Question and Ground Truth Answer carefully.
2. Compare the Extracted Answer with the Ground Truth Answer.
3. Explain step-by-step whether or not they express the same meaning or information.
4. Provide your final decision clearly at the end:
- Set `"Reward Score" = 1` if the answers are
equivalent.
- Set `"Reward Score" = 0` if the answers are not equivalent.
Your final response format must be: ```[Reward Score] = <1 or 0>``` [Question] [Ground Truth Answer] [Extracted Answer]

Figure 4: Prompt for using GPT-4o as an annotator to provide ground-truth annotations based on the model's response and the target answer, indicating whether the model's response aligns with the target answer.

B Detailed Results of Rule-based Verifiers Across Datasets

We evaluate the performance of several rule-based verifiers, including the Verl Math Verifier, Qwen-Math Verifier, and HuggingFace Math Verifier, on a subset of the static evaluation dataset sampled from Deepseek-R1-Distill-Qwen-32B, as constructed in §3.1. The detailed results are shown in Table 5, which indicates that some correct responses are misjudged as incorrect; we illustrate several such cases in Figure 5.

Table 5: Performance of different rule-based verifiers across various datasets. Results are reported as Precision/Recall/F1 scores. Evaluations are conducted on a subset of the static evaluation dataset sampled from Deepseek-R1-Distill-Qwen-32B, as described in §3.1.

Verifier               Math          DeepscaleR    ORZ-Math      Skywork-OR1
VERL Verifier          1/0.92/0.96   1/0.86/0.92   1/0.89/0.94   1/0.78/0.88
Qwen-Math Verifier     1/0.95/0.98   1/0.94/0.97   1/0.94/0.97   1/0.86/0.92
HuggingFace Verifier   1/0.95/0.97   1/0.94/0.96   1/0.95/0.97   0.99/0.88/0.93

Question: Assume that no three diagonals of a convex n-sided polygon intersect at the same point inside the polygon. Find the number of intersection points of the diagonals inside the polygon. Ground Truth Answer: C_n^4. Predicted Answer: [garbled in extraction; an expanded form of C_n^4].
Question: Given acute angles α and β satisfy sin α = [garbled] and sin(α − β) = [garbled], then β equals? Ground Truth Answer: π/4. Predicted Answer: 45°.

Figure 5: Examples of correct model responses that are incorrectly flagged as incorrect by the rule-based verifier. The upper example demonstrates that the model's predicted answer differs from the ground truth only in terms of mathematical formatting, while the lower highlights cases where different representations (such as π/4 and 45°) are considered equivalent given the query context (calculating angle β).

C Detailed Evaluation Setting for Model-based Verifiers

Prompt Format. For untrained verifiers, including (1) short-CoT models, Qwen-2.5-instruct (1.5B and 7B) [Yang et al., 2024b] and Qwen-2.5-math-instruct (1.5B and 7B) [Yang et al., 2024a], and (2) R1-style long-CoT models, DeepSeek-R1-Distill-Qwen (1.5B and 7B) [DeepSeek-AI et al., 2025], we employ a simplified prompt format during evaluation, providing only the ground truth and the model-generated answer to reduce overthinking. For the trained verifier, we apply an improved prompting strategy that includes the original question to provide additional context for verification. Prompts that include and exclude the original question for these verifiers are detailed in Figure 6.

Hyperparameters. Most verifiers use greedy decoding during evaluation. An exception is made for the R1-style long-CoT models (including our trained R1-Distill-Verifier-1.5B), for which we follow the settings of DeepSeek-AI et al. [2025], applying temperature = 0.6 and top-p = 0.95 to reduce output repetition.

D Detailed Results of Model-based Verifiers and Hybrid Verifiers

We evaluate model-based and hybrid verifiers on the static dataset described in §3.1, using a subset sampled from DeepSeek-R1-Distill-Qwen-32B. Detailed results are presented in Table 6. We show an example where DeepSeek-R1-Distill-Qwen-7B correctly identifies the equivalence between the ground truth and the predicted answer in Figure 7.

E Training and Evaluation Details of Reinforcement Learning

Implementation. We use Verl [Sheng et al., 2024] as the RL training framework and implement the model-based verifier within the HybridEngine architecture. HybridEngine efficiently
partitions models and dynamically switches between training and inference modes, significantly improving GPU utilization and reducing communication overhead during RL training. Building on this capability, we extend HybridEngine to the model-based verifier, allowing it to be offloaded from GPUs during idle periods. Compared with alternative implementations, such as assigning the verifier to dedicated GPUs or deploying it as a standalone server [Su et al., 2025, Ma et al., 2025], our approach minimizes contention between the policy model and the model-based verifier, further enhancing GPU efficiency.

Training. We train our models using the Verl framework [Sheng et al., 2024]. Training uses a prompt batch size of 1,024, generating 8 rollouts per prompt with a maximum rollout length of 8,192 tokens. We apply a mini-batch size of 256 for updates. The sampling temperature is set to 1.0 by default. Following Yu et al. [2025], we set the clip_high ratio to 0.28, maintain clip_low at 0.2, and set the KL coefficient to 0. We use the same training prompt as Zeng et al. [2025a].

Prompt without question:
Your task is to determine if the Extracted Answer is mathematically equivalent to the Ground Truth Answer.
Ground Truth Answer: {ground_truth}
Extracted Answer: {extracted_answer}
- If the Extracted Answer and Ground Truth Answer are mathematically equivalent, respond with \\boxed{{1}}
- If they are not mathematically equivalent, or if the Extracted Answer is nonsensical (e.g., a random string), respond with \\boxed{{0}}

Prompt with question:
Your task is to determine if the Extracted Answer is mathematically equivalent to the Ground Truth Answer.
Question: {original_problem}
Ground Truth Answer: {ground_truth}
Extracted Answer: {extracted_answer}
Please follow these steps clearly:
1. Review the Question and Ground Truth Answer carefully.
2. Compare the Extracted Answer with the Ground Truth Answer.
3. Explain step-by-step whether or not they express the same meaning or information.
4.
Provide your final decision clearly at the end:
- Respond with \\boxed{{1}} if the answers are equivalent.
- Respond with \\boxed{{0}} if the answers are not equivalent.

Figure 6: Prompts that include and exclude the original question.

Evaluation. We build our evaluation script based on Yang et al. [2024a], using a temperature of 1.0, top-p of 0.7, and a maximum generation length of 16K tokens. To ensure consistency, we adopt the same prompt template used during training. For most benchmarks, we report pass@1 results. However, for AIME 2024, which contains fewer problems, we report both pass@1 and average accuracy (avg@32), computed over 32 generated samples per problem.

Hardware. We train our models on four nodes, each equipped with 8 H100-80G GPUs, for approximately three days per experimental run.

F Detailed Results of Reinforcement Learning Training Dynamics using general-verifier

We provide the detailed results of RL training using the HuggingFace Verifier and general-verifier as a hybrid verifier on DeepscaleR in Figure 8.

G Training Details for R1-Distill-Verifier-1.5B

To reduce overthinking and encourage more concise, focused outputs, we fine-tune DeepSeek-R1-Distill-Qwen-1.5B using rejection fine-tuning [Yuan et al., 2023]. Specifically, we sample 1K queries from the DeepscaleR dataset (non-overlapping with the evaluation set described in §3.1). For each query, we generate eight candidate responses from DeepSeek-R1-Distill-Qwen-32B and use GPT-4o [Hurst et al., 2024] as an annotator to assess whether each response aligns with the ground-truth answer. We then sample eight candidate
responses from DeepSeek-R1-Distill-Qwen-1.5B. Responses that do not match GPT-4o's judgment or are duplicates are filtered out, yielding approximately 20K examples for fine-tuning. The model is fully fine-tuned using a learning rate of 1e-4 for 3 epochs.

Table 6: Performance of model-based verifier and hybrid verifier across various datasets. Results are reported as Precision/Recall. Evaluations are conducted on a subset of the static evaluation dataset sampled from Deepseek-R1-Distill-Qwen-32B, as described in §3.1.

Verifier                   Math         DeepscaleR   ORZ-Math     Skywork-OR1
HuggingFace Verifier       0.999/0.951  0.995/0.935  0.997/0.953  0.988/0.877
General LLM as Judge
Qwen2.5-1.5B               0.993/0.956  0.98/0.951   0.991/0.95   0.952/0.88
  + HF Verifier            0.994/0.974  0.976/0.968  0.983/0.986  0.95/0.92
Qwen2.5-Math-1.5B          0.993/0.957  0.982/0.948  0.982/0.949  0.941/0.899
  + HF Verifier            0.992/0.976  0.982/0.967  0.985/0.982  0.949/0.922
DS-R1-Distill-Qwen-1.5B    0.991/0.78   0.979/0.774  0.982/0.769  0.948/0.721
  + HF Verifier            0.992/0.976  0.986/0.968  0.989/0.979  0.954/0.903
Qwen2.5-7B                 0.997/0.954  0.993/0.938  0.997/0.93   0.979/0.88
  + HF Verifier            0.998/0.972  0.993/0.974  0.997/0.982  0.971/0.904
Qwen2.5-Math-7B            0.996/0.953  0.989/0.945  0.995/0.934  0.965/0.881
  + HF Verifier            0.997/0.976  0.989/0.97   0.996/0.986  0.968/0.914
DS-R1-Distill-Qwen-7B      0.994/0.942  0.985/0.927  0.991/0.932  0.967/0.882
  + HF Verifier            0.996/0.977  0.984/0.974  0.991/0.987  0.976/0.919
Trained Verifier
R1-Distill-Verifier-1.5B   0.992/0.926  0.977/0.892  0.991/0.939  0.955/0.867
  + HF Verifier            0.992/0.981  0.983/0.973  0.988/0.986  0.959/0.933
xVerify-0.5B-I             0.993/0.976  0.986/0.935  0.99/0.965   0.975/0.887
  + HF Verifier            0.994/0.984  0.988/0.973  0.99/0.989   0.976/0.919
xVerify-3B-Ia              0.997/0.988  0.99/0.932   0.995/0.962  0.989/0.925
  + HF Verifier            0.997/0.996  0.992/0.977  0.996/0.992  0.989/0.958
general-verifier           0.997/0.983  0.991/0.965  0.994/0.98   0.98/0.958
  + HF Verifier            0.997/0.996  0.994/0.987  0.994/0.994  0.98/0.976

H Analysis of Hacking Patterns During RL
Training

In §5.2, we observe that R1-Distill-Verifier-1.5B becomes compromised during dynamic RL training, leading to a drop in evaluation accuracy and eventual training collapse. We conduct a detailed analysis of the patterns that lead to hacking. As shown in Figure 9 and Figure 10, the policy model exploits vulnerabilities in the verifier during training by outputting either a single simple character (such as “{”) or long sequences of meaningless text to bypass the verifier.

I Details of Verifier Robustness Probing

I.1 Adversarial Patterns Construction

As mentioned in §6, to systematically probe the vulnerabilities of verifiers, we construct a new adversarial dataset based on approximately 471 samples from the DeepScaleR dataset. We include 13 distinct hacking pattern types, as shown in Table 7.

I.2 Detailed Results

We present the detailed results of the hacking patterns against verifiers in Table 8 and Table 9.

Question: Given acute angles α and β satisfy sin α = [garbled] and sin(α − β) = [garbled], then β equals? Ground Truth Answer: π/4. Predicted Answer: 45°.
R1-Qwen-1.5B: <think> First, I need to compare the Extracted Answer of 45 with the Ground Truth Answer of π/4. I know that 45 degrees is a common angle in trigonometry. To see how it relates to radians, I'll convert 45 degrees to radians by multiplying by π/180: 45 × (π/180) = π/4. This shows that both answers represent the same angle but expressed differently—one in degrees and the other in radians. Since they are equivalent representations of the same angle, the Extracted Answer matches the Ground Truth Answer. </think> The Extracted Answer is 45, while the Ground Truth Answer is \( \frac{\pi}{4} \). Step-by-Step Explanation: 1. Understand the Units: - The Ground Truth Answer uses radians (\( \pi \) radians are equal to 180 degrees). - The Extracted Answer is given in degrees (45 degrees). 2. Convert Radians to Degrees: - To compare, convert
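The degree/radian equivalence the distilled model reasons about above can also be checked mechanically. The sketch below is a hypothetical illustration, not the paper's verifier: the `normalize_angle` helper and the handful of answer formats it supports are assumptions for this example only.

```python
import math

def normalize_angle(answer: str) -> float:
    """Convert an angle written in degrees (e.g. '45°', '90deg') or as a
    pi-based radian expression (e.g. 'pi/4', '2pi') to a float in radians.
    Hypothetical helper; real verifiers must handle far more formats."""
    answer = answer.strip()
    if answer.endswith(("°", "deg")):
        # strip the unit suffix and convert degrees to radians
        return math.radians(float(answer.rstrip("°deg").strip()))
    if "pi" in answer:
        num, _, den = answer.partition("/")
        coeff = num.replace("pi", "").strip() or "1"   # 'pi' means 1*pi
        value = float(coeff) * math.pi
        return value / float(den) if den else value
    return float(answer)

def equivalent(a: str, b: str, tol: float = 1e-9) -> bool:
    """True when two answer strings denote the same angle."""
    return abs(normalize_angle(a) - normalize_angle(b)) < tol
```

Under these assumptions, `equivalent("45°", "pi/4")` holds, which is exactly the judgment the rule-based verifiers in Figure 5 fail to make and the generative verifier above reaches via chain-of-thought.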