Figure 5: Model Pairs Comparison. These three sub-figures illustrate confusion matrices of renovation results between pairs of VLMs: Janus/Qwen, Janus/LLaVA, and Qwen/LLaVA, respectively, over the ten CIFAR-10 classes (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck).

Figure 6: Count of VLMs' output on CIFAR-10.

The label distribution is presented in Figure 6, illustrating that all four models produce predictions across all ten CIFAR-10 classes. The figure reveals a notable observation: VLMs exhibit biased prediction tendencies across different classes. In particular, Qwen shows a strong preference for the labels bird and cat, assigning them far more frequently than other categories. Janus tends to significantly overpredict horse, while LLaVA and BLIP display more balanced label distributions, with no significant bias found in these two models. We believe that the presence of these biases may be attributed to hallucinations caused by prior knowledge or learning objectives used during model development.
In particular, imbalanced label distributions or domain-specific image-text pairs could reinforce model preferences for certain categories.

4 Equipping VLM with Label-Noise Curation

In addition to VLMs, we also incorporate several specialized label-diagnosis methods, including Cleanlab and Docta, to support and enhance the final label aggregation process. Cleanlab [32] is a model-agnostic framework that identifies and corrects noisy labels by estimating the joint distribution of noisy
https://arxiv.org/abs/2505.16149v1
and true labels using the Confident Learning paradigm. Docta [57] systematically audits dataset credibility by estimating label noise and ranking instances based on the alignment between observed and inferred labels, without requiring ground-truth annotations. For the results obtained from VLMs and related methods, our aggregation procedure consists of the following steps:

4.1 Constructing Pseudo Ground-Truth Labels via Model Voting

For each dataset X, we estimate the accuracy of each method using the first 100 images. Specifically, we collect predictions from $m$ different models $\{\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_m\}$, where each $\tilde{y}_i = [\tilde{y}_i^{(1)}, \tilde{y}_i^{(2)}, \ldots, \tilde{y}_i^{(n)}]$ denotes the label sets predicted by model $i$ on the first $n = 100$ images. For each image $x^{(j)}$, we apply a voting strategy across the $m$ models to generate pseudo ground-truth labels. Specifically, the vote count for a candidate label $c \in \mathcal{C}$ is defined as:

$$\mathrm{vote}^{(j)}(c) = \sum_{i=1}^{m} \mathbb{I}\left[c \in \tilde{y}_i^{(j)}\right],$$

where $\mathbb{I}[\cdot]$ is the indicator function, equal to 1 if model $i$ predicts label $c$ for image $x^{(j)}$, and 0 otherwise. The aggregated pseudo ground-truth label set for image $x^{(j)}$ is defined as:

$$\tilde{y}^{(j)}_{\mathrm{ground\_truth}} = \left\{ c \in \mathcal{C} \mid \mathrm{vote}^{(j)}(c) \geq k \right\},$$

where $k$ is the minimum number of votes required for label $c$ to be included. The ground-truth labels $y_{\mathrm{ground\_truth}}$, manually verified, are used to assess each model's estimated accuracy.

4.2 Model Expertise (Accuracy) Estimation

Given the pseudo ground-truth labels $\tilde{y}_{\mathrm{ground\_truth}}$, we assess the estimated accuracy of each model on this subset. To account for the tendency of VLMs to produce overly broad predictions, we introduce a regularization term that penalizes such behavior during evaluation. Let $\tilde{y}^{(j)}_{\mathrm{model}}$ denote the predicted label set of a model for image $x^{(j)}$, and let $\tilde{y}^{(j)}_{\mathrm{ground\_truth}}$ be the corresponding pseudo ground-truth label set. We compute the model's estimated accuracy as:

$$\mathrm{est\_acc} := \frac{\sum_{j=1}^{n} \left|\tilde{y}^{(j)}_{\mathrm{model}} \cap \tilde{y}^{(j)}_{\mathrm{ground\_truth}}\right|}{\sum_{j=1}^{n} \left|\tilde{y}^{(j)}_{\mathrm{ground\_truth}}\right|} \cdot \left(1 - \frac{\sum_{j=1}^{n} \left|\tilde{y}^{(j)}_{\mathrm{model}}\right|}{n \cdot |\mathcal{C}|}\right).$$
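The voting and penalized-accuracy computations above can be sketched as follows. This is a minimal illustration: function names, the data layout, and the toy inputs are our own assumptions, not the paper's released code.

```python
from typing import Dict, List, Set

def pseudo_ground_truth(model_preds: List[List[Set[str]]], k: int) -> List[Set[str]]:
    """model_preds[i][j] is model i's predicted label set for image j.
    A label joins the pseudo ground truth of image j once at least k models vote for it."""
    n = len(model_preds[0])
    gt = []
    for j in range(n):
        votes: Dict[str, int] = {}
        for preds in model_preds:  # one vote per model that predicted label c
            for c in preds[j]:
                votes[c] = votes.get(c, 0) + 1
        gt.append({c for c, v in votes.items() if v >= k})
    return gt

def estimated_accuracy(preds: List[Set[str]], gt: List[Set[str]], num_classes: int) -> float:
    """Overlap with the pseudo ground truth, multiplied by a penalty term that
    shrinks as a model's predictions grow toward the full candidate label set."""
    n = len(preds)
    overlap = sum(len(p & g) for p, g in zip(preds, gt))
    gt_size = sum(len(g) for g in gt)
    penalty = 1.0 - sum(len(p) for p in preds) / (n * num_classes)
    return (overlap / gt_size) * penalty
```

On a toy run with three models and two images, two votes (k = 2) suffice to recover a single-label pseudo ground truth, and a model predicting exactly those labels scores the full overlap times the size penalty.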
Here, $|\cdot|$ denotes set cardinality, $n$ is the number of images (typically 100), and $\mathcal{C}$ is the complete set of candidate labels. The second term penalizes models that produce excessive predictions, thus reducing the impact of random guessing. This adjustment encourages more precise outputs and improves robustness in accuracy estimation. The estimated accuracies of different models across datasets are summarized in Table 1.

Dataset     | BLIP  | LLaVA | Janus | Qwen  | DOCTA | Cleanlab | Origin | Full Score
CIFAR-10    | 0.777 | 0.815 | 0.676 | 0.650 | 0.973 | 0.928    | 0.975  | 5.794
CIFAR-100   | 0.750 | 0.812 | 0.650 | 0.774 | 0.839 | 0.671    | 0.856  | 4.730
Caltech256  | 0.927 | 0.843 | 0.784 | 0.879 | –     | 0.808    | 0.935  | 5.176
ImageNet 1K | 0.858 | 0.807 | 0.490 | 0.793 | –     | 0.490    | 0.569  | 4.007
QuickDraw   | 0.710 | 0.773 | 0.430 | 0.792 | –     | 0.181    | 0.210  | 3.096
MNIST       | 0.592 | 0.696 | 0.582 | 0.822 | 0.983 | 0.988    | 0.991  | 5.654

Table 1: Estimated accuracy of different renovation methods across datasets. Origin denotes the estimated accuracy of the dataset's original labels, evaluated against the pseudo ground truth. Full Score is the sum of all methods' accuracies, denoting the theoretical maximum score for the most likely labels.

Results reveal a negative correlation between label-pool size and estimated accuracy, underscoring the impact of task complexity on model performance. VLMs achieve the highest accuracy on the renovated Caltech256
dataset, likely due to its higher-resolution images, and perform well on ImageNet-1K, where Cleanlab underperforms. Conversely, Cleanlab outperforms VLMs on MNIST, indicating that statistical or human-guided methods are more effective for structured data. These findings suggest a complementary relationship between VLMs and traditional renovation methods, contingent on dataset characteristics.

4.3 Weighted Aggregation using Estimated Accuracy Scores

Following accuracy estimation, we perform prediction aggregation using the estimated accuracy scores $\{\mathrm{Acc}_1, \ldots, \mathrm{Acc}_m\}$ as model weights, without applying softmax normalization. For each image $x^{(j)}$ and candidate label $c \in \mathcal{C}$, we compute a weighted support score as:

$$\mathrm{score}^{(j)}(c) = \sum_{i=1}^{m} \mathrm{Acc}_i \cdot \mathbb{I}\left[c \in \tilde{y}_i^{(j)}\right],$$

where $\mathbb{I}[\cdot]$ denotes the indicator function. This score quantifies the level of support for label $c$ across all models, adjusted by their estimated reliability.

4.4 Post-Aggregation Filtering and Normalization

To determine the final label set for each image $x^{(j)}$, we apply a two-step filtering and normalization procedure:

• Thresholding and Top-$n$ Selection. For each image $x^{(j)}$, we retain labels $c$ that satisfy both: (i) $\mathrm{score}^{(j)}(c) \geq \tau$, where $\tau$ is a predefined threshold; and (ii) $c$ ranks among the top $n$ labels with the highest scores for $x^{(j)}$. Let $\mathcal{C}^{(j)}_{\mathrm{filtered}}$ denote the resulting label set.

• Softmax Normalization. We normalize the scores within $\mathcal{C}^{(j)}_{\mathrm{filtered}}$ via softmax:

$$p^{(j)}(c) = \frac{e^{\mathrm{score}^{(j)}(c)}}{\sum_{c' \in \mathcal{C}^{(j)}_{\mathrm{filtered}}} e^{\mathrm{score}^{(j)}(c')}}, \quad c \in \mathcal{C}^{(j)}_{\mathrm{filtered}}.$$

Dataset     | Image number | Noisy Label | Missing Label | Threshold / Full Score | Top-K
CIFAR-10    | 10000        | 38          | 1325          | 0.900 / 5.794          | 3
CIFAR-100   | 10000        | 552         | 6083          | 0.830 / 4.730          | 5
Caltech256  | 30607        | 766         | 30187         | 0.037 / 5.176          | 7
QuickDraw   | 2500         | 462         | 2500          | 0.018 / 3.096          | 5
ImageNet    | 50000        | 6546        | 35147         | 0.300 / 4.007          | 10
MNIST       | 10000        | 24          | 469           | 0.950 / 5.654          | 3

Table 2: REVEAL results across datasets.
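The weighted aggregation of Section 4.3 and the filtering and normalization of Section 4.4 can be sketched together as follows. This is a minimal illustration under our own naming assumptions, not the released implementation.

```python
import math
from typing import Dict, List, Set

def weighted_scores(preds: List[Set[str]], accs: List[float]) -> Dict[str, float]:
    """Support score for each candidate label on one image: the sum of the
    estimated accuracies of the models that predicted that label."""
    scores: Dict[str, float] = {}
    for pred, acc in zip(preds, accs):
        for c in pred:
            scores[c] = scores.get(c, 0.0) + acc
    return scores

def filter_and_normalize(scores: Dict[str, float], tau: float, top_n: int) -> Dict[str, float]:
    """Keep labels scoring at least tau, cap the set at the top_n highest scores,
    then softmax-normalize the surviving scores into soft labels."""
    kept = sorted((c for c, s in scores.items() if s >= tau),
                  key=lambda c: scores[c], reverse=True)[:top_n]
    z = sum(math.exp(scores[c]) for c in kept)
    return {c: math.exp(scores[c]) / z for c in kept}
```

With three models of accuracies 0.9, 0.8, and 0.5, a label predicted by the first two receives support 1.7; after thresholding, the surviving scores are converted into a probability distribution that sums to one.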
Here, Top-K denotes the maximum number of labels allowed per image in the dataset, and the Threshold refers to the minimum weighted support score required for a label to be included in the final result. The definition of Full Score was introduced in Table 1. Noisy Label and Missing Label represent the number of images identified by REVEAL as containing such issues under the specified configuration.

4.5 Observations

Figure 7: Label correction visualization across datasets.

Observation 5: Missing labels are another possible way to describe the image content. In datasets with a relatively small number of possible labels, such as CIFAR-10 and MNIST, when multiple missing labels are identified through renovation, this “multiplicity” often reflects uncertainty in label assignment rather than the actual co-occurrence of multiple objects. For instance, as illustrated in a missing-label example from MNIST in Figure 7, VLMs estimate a 54% probability that the digit is a 9 and a 46% probability that it is a 4, rather than suggesting that both digits are simultaneously present in the image.

Observation 6: Missing labels also refer to elements present in the image but omitted during annotation. In datasets with a large number of possible labels, such as CIFAR-100 (100 classes) and Caltech256 (257 classes), cases of multiple missing labels identified through renovation are more likely to indicate true multi-object presence. As illustrated in a missing-label example from CIFAR-100 in Figure 7, VLMs predict that, in addition to the original label “forest”, the image
also contains “man” and “boy”, suggesting genuine label omissions rather than probabilistic ambiguity.

Observation 7: When the number of label candidates is extremely large (e.g., in the case of ImageNet), VLMs tend to produce semantically related sets of labels. In datasets with a large number of possible labels, such as Caltech256 and ImageNet, the occurrence of multiple missing labels during renovation may sometimes be attributed to semantic similarity between certain classes. As illustrated in an example from Caltech256 in Figure 7, VLMs predict that the image could correspond to either “baseball bat” or “bat”. While traditional deep learning classification models typically treat labels as discrete one-hot encodings, VLMs interpret label semantics in a continuous space, making them more prone to confusion between semantically similar classes.

4.6 Comparison with Human-Annotated Renovation

We further compare and analyze the renovation results produced by our VLMs and other methods against human annotations collected via MTurk [33] across the datasets. We calculate the agreement rates between the renovation results of our four VLM-based methods, as well as the aggregated final results, and the MTurk-evaluated datasets (as shown in Table 3).

Dataset     | BLIP  | LLaVA | Janus | Qwen  | REVEAL
CIFAR-10    | 0.806 | 0.785 | 0.818 | 0.704 | 0.976
CIFAR-100   | 0.661 | 0.755 | 0.556 | 0.668 | 0.880
Caltech256  | 0.850 | 0.774 | 0.764 | 0.836 | 0.946
ImageNet 1K | 0.664 | 0.560 | 0.306 | 0.402 | 0.569
QuickDraw   | 0.369 | 0.421 | 0.260 | 0.467 | 0.630
MNIST       | 0.435 | 0.518 | 0.365 | 0.506 | 0.776

Table 3: Agreement rates between REVEAL and MTurk annotations across datasets. Individual VLMs and the aggregated results (REVEAL) are evaluated against MTurk labels. Highlighted cells indicate the highest agreement per dataset.

Figure 8: Examples comparing REVEAL results with human annotations and single VLM results.
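As a rough sketch of how such an agreement rate could be computed: the exact matching protocol is not spelled out here, so treating an image as "agreeing" whenever the two label sets share at least one label is our assumption.

```python
from typing import List, Set

def agreement_rate(method_labels: List[Set[str]], mturk_labels: List[Set[str]]) -> float:
    """Fraction of images on which a method's label set shares at least one
    label with the MTurk annotation (assumed matching rule)."""
    hits = sum(1 for a, b in zip(method_labels, mturk_labels) if a & b)
    return hits / len(method_labels)
```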
With respect to the comparison against MTurk human annotations, we make the following observations:

Observation 8: High Alignment with Human Annotations. Across most datasets, the REVEAL method demonstrates a high degree of consistency with MTurk human annotations.

Observation 9: Discrepancies Due to MTurk Constraints. On the ImageNet and QuickDraw datasets, REVEAL exhibits substantial discrepancies compared to the MTurk annotations, which may arise from limitations in the MTurk evaluation protocol. In particular, annotators are typically limited to validating only two candidate labels per image. For datasets like ImageNet and QuickDraw, which have a large space of plausible labels, this constraint fails to adequately capture missing-label issues, thereby reducing the reliability of the MTurk annotations on these datasets.

Observation 10: REVEAL Outperforms Individual VLMs. Across most datasets, REVEAL demonstrates superior alignment with human judgment compared to individual VLMs.

5 Conclusion

We address a critical yet underexplored issue in image classification: the presence of noisy and missing labels in widely used benchmark datasets. We propose a unified framework, REVEAL, that renovates test sets by combining label-noise detection and missing-label imputation using VLMs. Through model agreement analysis, expertise estimation, and ensembling, we construct high-quality pseudo ground truths that better reflect image content. Across six datasets (CIFAR-10, CIFAR-100, ImageNet, Caltech256, QuickDraw, and MNIST), alongside comparison with human annotations (MTurk), we find noisy labels and missing labels to be pervasive, and we provide a systematic analysis via 10 observations. Our method effectively reveals
missing labels, providing soft-labeled outputs that exhibit a high degree of alignment with human judgments. Our work offers a model-centric alternative for benchmark improvement, and we hope it inspires future efforts in multi-label evaluation, open-vocabulary testing, and human-in-the-loop verification.

References

[1] Jean-Baptiste Alayrac, Jeff Donahue, Federico Lucarella, et al. Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198, 2022.

[2] Ehsan Amid, Manfred K Warmuth, Rohan Anil, and Tomer Koren. Robust bi-tempered logistic loss based on Bregman divergences. Advances in Neural Information Processing Systems, 32, 2019.

[3] Hossam Awadalla, Jiahui Yu, Jack Clark, et al. OpenFlamingo: An open-source framework for multimodal few-shot learning. arXiv preprint arXiv:2308.01390, 2023.

[4] Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-VL: A versatile vision-language model for understanding, localization, text reading, and beyond. arXiv preprint arXiv:2308.12966, 2023.

[5] Emanuel Ben-Baruch, Tal Ridnik, Itamar Friedman, Avi Ben-Cohen, Nadav Zamir, Asaf Noy, and Lihi Zelnik-Manor. Multi-label classification with partial annotations using class-aware selective loss. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4764–4772, 2022.

[6] Joseph Chee Chang, Saleema Amershi, and Ece Kamar. Revolt: Collaborative crowdsourcing for labeling machine learning datasets. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 2334–2346, 2017.

[7] Pengfei Chen, Ben Ben Liao, Guangyong Chen, and Shengyu Zhang. Understanding and utilizing deep neural networks trained with noisy labels. In International Conference on Machine Learning, pages 1062–1070. PMLR, 2019.

[8] Xiaokang Chen, Zhiyu Wu, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, and Chong Ruan.
Janus-Pro: Unified multimodal understanding and generation with data and model scaling. arXiv preprint arXiv:2501.17811, 2025.

[9] Hao Cheng, Zhaowei Zhu, Xingyu Li, Yifei Gong, Xing Sun, and Yang Liu. Learning with instance-dependent label noise: A sample sieve approach. In International Conference on Learning Representations, 2021.

[10] Xue Dai, Junnan Li, Zhengyuan Dai, et al. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.

[11] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.

[12] Thibaut Durand, Nazanin Mehrasa, and Greg Mori. Learning a deep convnet for multi-label classification with partial labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 647–657, 2019.

[13] Jacob Goldberger and Ehud Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In International Conference on Learning Representations, 2017.

[14] Gregory Griffin, Alex Holub, Pietro Perona, et al. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, Pasadena, 2007.

[15] Dan Hendrycks, Mantas Mazeika, Duncan Wilson, and Kevin Gimpel. Using trusted data to train deep networks on labels corrupted by severe noise. Advances in Neural Information Processing Systems, 31, 2018.

[16] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig.
Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904–4916. PMLR, 2021.

[17] Evgeny Krivosheev, Siarhei Bykau, Fabio Casati, and Sunil Prabhakar. Detecting and preventing confused labels in crowdsourced data. Proceedings of the VLDB Endowment, 13(12):2522–2535, 2020.

[18] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.

[19] Abhishek Kumar and Ehsan Amid. Constrained instance and class reweighting for robust learning under label noise. arXiv preprint arXiv:2111.05428, 2021.

[20] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

[21] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888–12900. PMLR, 2022.

[22] Junnan Li, Dongxu Li, Caiming Xiong, and Steven CH Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.

[23] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34:9694–9705, 2021.

[24] Haotian Liu, Chunyuan Zhang, Yuheng Xu, Shiqi Chang, Jianwei Zhang, Yizhou Wang, and Lu Yuan. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.

[25] Minghao Liu, Zonglin Di, Jiaheng Wei, Zhongruo Wang, Hengxiang Zhang, Ruixuan Xiao, Haoyu Wang, Jinlong Pang, Hao Chen, Ankit Shah, et al. Automatic dataset construction (ADC): Sample collection, data curation, and beyond. arXiv preprint arXiv:2408.11338, 2024.
[26] Minghao Liu, Jiaheng Wei, Yang Liu, and James Davis. Human and AI perceptual differences in image classification errors. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 39, pages 14318–14326, 2025.

[27] Tongliang Liu and Dacheng Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(3):447–461, 2016.

[28] Yang Liu and Hongyi Guo. Peer loss functions: Learning from noisy labels without knowing noise rates. In International Conference on Machine Learning, pages 6226–6236. PMLR, 2020.

[29] Xingjun Ma, Hanxun Huang, Yisen Wang, Simone Romano, Sarah Erfani, and James Bailey. Normalized loss functions for deep learning with noisy labels. In International Conference on Machine Learning, pages 6543–6553. PMLR, 2020.

[30] Zhongchen Ma, Lisha Li, Qirong Mao, and Songcan Chen. Label structure preserving contrastive embedding for multi-label learning with missing labels. arXiv preprint arXiv:2209.01314, 2022.

[31] Nagarajan Natarajan, Inderjit S Dhillon, Pradeep K Ravikumar, and Ambuj Tewari. Learning with noisy labels. In Advances in Neural Information Processing Systems, pages 1196–1204, 2013.

[32] Curtis Northcutt, Lu Jiang, and Isaac Chuang. Confident learning: Estimating uncertainty in dataset labels. Journal of Artificial Intelligence Research, 70:1373–1411, 2021.

[33] Curtis G. Northcutt, Anish Athalye, and Jonas Mueller. Pervasive label errors in test sets destabilize machine learning benchmarks. In Proceedings of the 35th Conference on Neural Information Processing Systems Track on Datasets and Benchmarks, December 2021.

[34] Curtis G Northcutt, Anish Athalye, and Jonas Mueller. Pervasive
label errors in test sets destabilize machine learning benchmarks. arXiv preprint arXiv:2103.14749, 2021.

[35] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.

[36] Ryutaro Tanno, Ardavan Saeedi, Swami Sankaranarayanan, Daniel C Alexander, and Nathan Silberman. Learning from noisy labels by regularized estimation of annotator confusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11244–11253, 2019.

[37] Jingkang Wang, Hongyi Guo, Zhaowei Zhu, and Yang Liu. Policy learning using weak supervision. Advances in Neural Information Processing Systems, 34, 2021.

[38] Yisen Wang, Xingjun Ma, Zaiyi Chen, Yuan Luo, Jinfeng Yi, and James Bailey. Symmetric cross entropy for robust learning with noisy labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 322–330, 2019.

[39] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13726–13735, 2020.

[40] Hongxin Wei, Lue Tao, Renchunzi Xie, and Bo An. Open-set label noise can improve robustness against inherent label noise. Advances in Neural Information Processing Systems, 34, 2021.

[41] Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Masashi Sugiyama, and Yang Liu. To smooth or not? When label smoothing meets noisy labels. In International Conference on Machine Learning, pages 23589–23614. PMLR, 2022.

[42] Jiaheng Wei and Yang Liu. When optimizing f-divergence is robust with label noise. arXiv preprint arXiv:2011.03687, 2020.
[43] Jiaheng Wei, Harikrishna Narasimhan, Ehsan Amid, Wen-Sheng Chu, Yang Liu, and Abhishek Kumar. Distributionally robust post-hoc classifiers under prior shifts. In The Eleventh International Conference on Learning Representations, 2023.

[44] Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, and Yang Liu. Learning with noisy labels revisited: A study using real-world human annotations. arXiv preprint arXiv:2110.12088, 2021.

[45] Jiaheng Wei, Zhaowei Zhu, Tianyi Luo, Ehsan Amid, Abhishek Kumar, and Yang Liu. To aggregate or not? Learning with separate noisy labels. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 2523–2535, 2023.

[46] Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, and Yang Liu. Fairness improves learning from noisily labeled long-tailed data. arXiv preprint arXiv:2303.12291, 2023.

[47] Tong Wei, Hao-Tian Li, ChunShu Li, Jiang-Xin Shi, Yu-Feng Li, and Min-Ling Zhang. Vision-language models are strong noisy label detectors. Advances in Neural Information Processing Systems, 37:58154–58173, 2024.

[48] Yixuan Wei, Yue Cao, Zheng Zhang, Houwen Peng, Zhuliang Yao, Zhenda Xie, Han Hu, and Baining Guo. iCLIP: Bridging image classification and contrastive language-image pre-training for visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2776–2786, 2023.

[49] Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, and Ping Luo. Janus: Decoupling
visual encoding for unified multimodal understanding and generation. arXiv preprint arXiv:2410.13848, 2024.

[50] Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, and Masashi Sugiyama. Are anchor points really indispensable in label-noise learning? Advances in Neural Information Processing Systems, 32, 2019.

[51] Lewei Yao, Runhui Huang, Lu Hou, Guansong Lu, Minzhe Niu, Hang Xu, Xiaodan Liang, Zhenguo Li, Xin Jiang, and Chunjing Xu. FILIP: Fine-grained interactive language-image pre-training. arXiv preprint arXiv:2111.07783, 2021.

[52] Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? In International Conference on Machine Learning, pages 7164–7173. PMLR, 2019.

[53] Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. LiT: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123–18133, 2022.

[54] Wenqiao Zhang, Changshuo Liu, Lingze Zeng, Bengchin Ooi, Siliang Tang, and Yueting Zhuang. Learning in imperfect environment: Multi-label classification with long-tailed distribution and partial labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1423–1432, 2023.

[55] Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Conditional prompt learning for vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16816–16825, 2022.

[56] Zhaowei Zhu, Yiwen Song, and Yang Liu. Clusterability as an alternative to anchor points when learning with noisy labels. In International Conference on Machine Learning, pages 12912–12923. PMLR, 2021.

[57] Zhaowei Zhu, Jialu Wang, Hao Cheng, and Yang Liu. Unmasking and improving data credibility: A study with datasets for training harmless language models.
arXiv preprint arXiv:2311.11202, 2023.

A Broader Impacts and Limitations

In this section, we list some of the broader impacts as well as the limitations of REVEAL.

A.1 Broader Impacts

• Vision-Language Dataset Renovation. REVEAL is not limited to image classification and can be extended to other multi-modal tasks that require strong visual recognition capabilities. Image captioning is a particularly suitable application, as VLMs are explicitly trained to generate descriptive textual outputs from images. However, many existing image captioning datasets suffer from noise and sub-optimal quality, largely due to their collection from uncontrolled web sources. For instance, captions relying solely on proper nouns of tourist attractions fail to convey the actual visual content of the scene. This underscores the need for effective data curation strategies. By applying our weighted voting ensemble approach, REVEAL can produce more accurate, abundant, and fine-grained image descriptions, leading to the development of higher-quality benchmarks for training and evaluating multi-modal models.

• Biased Dataset Curation via Ensembling. The aggregation design in REVEAL enables the correction of biases inherent in original datasets or individual curation methods. As VLMs are prone to hallucinations and biased output due to their strong priors, employing a weighted voting scheme based on estimated accuracies helps mitigate noise and erroneous outputs from any single method. This corrective effect is empirically observed in our renovated results, demonstrating the robustness of the ensemble approach.

• Soft Labels and Extensions.
REVEAL effectively identifies missing labels and provides soft-labeled outputs accompanied by likelihood estimates, facilitating its generalization to diverse classification tasks. Compared to traditional one-hot encoding approaches, soft-labeling offers a more nuanced representation that integrates both linguistic and visual semantic information, thereby aligning predictions more closely with human perception and enhancing explainability. Such representations are particularly beneficial for semantic analysis tasks across both natural language processing and computer vision domains. For example, soft labels can naturally extend to object detection and sentiment analysis scenarios, where image segments or texts may correspond to multiple labels with associated likelihoods, improving prediction quality and semantic interpretability.

• Understanding Perceptual Gaps Between Models and Humans. Beyond the renovation of test sets, REVEAL also provides a valuable lens into human-AI agreement. By comparing its soft-labeled outputs against human annotations, the framework can help identify systematic divergences between model reasoning and human judgment, especially on ambiguous or low-quality images. This opens a promising direction for studying the cognitive boundaries of alignment between vision-language models and human perception, which is essential for building more transparent, trustworthy, and human-compatible AI systems.

A.2 Limitations

• Human Disagreement on Ambiguous Images. Although we compare REVEAL outputs against human annotations, some images in the datasets are inherently low-quality or visually ambiguous. This leads to variability in human judgment, making it difficult to establish a definitive ground truth and complicating the evaluation of model performance.

• Limitations of Fixed Label Sets in Real Conditions. In practical scenarios, images may contain objects or concepts that are not included in the predefined label set.
As a result, even semantically reasonable predictions can be considered incorrect during evaluation, making it difficult to fully capture model performance in such conditions.

B Prompt Evaluation Supplementary Results

Beyond label batch size, our experiments also reveal that prompting models to provide a reason for each decision yields higher recall at a fixed batch size. Moreover, leveraging the strong image captioning capabilities of VLMs, by first prompting for a description and then guiding label selection based on that description, further enhances labeling quality. In addition, we observe that VLMs occasionally generate labels that fall outside the predefined label set. To mitigate the potential bias introduced by an alphabetic ordering implicitly learned by the VLMs, the most effective strategy is to shuffle the label list prior to dividing it into batches. Accordingly, our final prompt design incorporates reasoning, image description, and label shuffling.

Prompt Examples

<User Prompt>: <Image placeholder> Please follow the instructions with no exceptions.

Binary Questioning. Is the main object of this image an <label name>? All the answer should be in json format {‘answer’:‘Yes or No’,‘reason’:‘reason of the answers’}

Direct Multi-Label Selection. You are given an image and answer all the labels that appear in the image from the following options: (<label candidate list>) There may be multiple labels in a single image, please answer at most 3 possible labels and separate them with ‘, ’. If there is no label appearing in the image, please answer None. Please provide the reason of replying these label and review your answer. Please
think step by step and provide similar characteristic in details between the label you choose and the image. All the answer should be in format {‘answer’:‘your answer 1,your answer 2’,‘reason’:‘reason of the answers’}

Batched Multi-Label Selection. Forget you previous answer. Describe the image and choose labels from the candidates: (<label candidate list>) that you think are in the image. Remember your answer should only contain labels from the given candidates! If you think none of them are in the image, please reply None. Please provide a short reason for your choice. Before you make the final response, carefully review if your answer ONLY contains labels in the candidates. Your answer should be a json dict: {‘answer’: [your answer list], ‘description’: image description, ‘reason’: your reason for choosing them}. Please don’t reply in other formats.

The evaluation results for label batch size using Qwen, Janus, and LLaVA are presented in Table 4, Table 5, and Table 6, respectively.

Label Batch Size | Model | Recall | Output Length | Time (min)
10               | Qwen  | 0.83   | 13.56         | 25
20               | Qwen  | 0.83   | 8.75          | 10
30               | Qwen  | 0.81   | 6.64          | 9
40               | Qwen  | 0.81   | 5.42          | 6
50               | Qwen  | 0.74   | 3.61          | 4

Table 4: Prompt batch size evaluation results on Qwen.

Label Batch Size | Model | Recall | Output Length | Time (min)
10               | Janus | 0.89   | 28.6          | 66
20               | Janus | 0.83   | 22            | 40
30               | Janus | 0.70   | 17.8          | 35
40               | Janus | 0.72   | 13            | 28
50               | Janus | 0.61   | 8             | 17
100              | Janus | 0.43   | 5             | 22

Table 5: Prompt batch size evaluation results on Janus.

Label Batch Size | Model | Recall | Output Length | Time (min)
10               | LLaVA | 0.87   | 28.6          | 9.82
20               | LLaVA | 0.81   | 22            | 4.97
30               | LLaVA | 0.77   | 17.8          | 4.07
40               | LLaVA | 0.76   | 13            | 4.00
50               | LLaVA | 0.75   | 8             | 3.59

Table 6: Prompt batch size evaluation results on LLaVA.

C Experimental Setting

The experimental settings for individual renovation methods are summarized in Table 7. For renovation, we deploy each of the three VLMs (BLIP [21], LLaVA [24], and Janus [49]) on 16 machines with NVIDIA A800 GPUs (80GB memory). Our batch size is set to 32 for BLIP as well as LLaVA, and 1 for Janus.
Renovation with Qwen is carried out via API access and is therefore executed on a CPU (13th Gen Intel(R) Core(TM) i7-13620H, 16 CPUs) with 5 processes for parallel processing. Due to limited computational resources, Docta [57] can only perform diagnostics on datasets with a relatively small number of label candidates (e.g., CIFAR-10, CIFAR-100, and MNIST). All Docta experiments were conducted on a single machine equipped with one NVIDIA L20 GPU (48 GB memory). For detailed configurations, please refer to the official Docta repository: https://github.com/Docta-ai/docta.

Dataset | Threshold (for BLIP) | Top-α (for BLIP) | labels/prompt (for other VLMs)
CIFAR-10 | 0.15 | 3 | 10
CIFAR-100 | 0.015 | 5 | 20
Caltech256 | 0.006 | 5 | 50
ImageNet 1K | 0.00015 | 20 | 67
QuickDraw | 0.004 | 5 | 60
MNIST | 0.15 | 3 | 10

Table 7: Renovation settings across datasets.

D Introduction to Methods

BLIP. BLIP (Bootstrapping Language-Image Pre-training) [21] is a VLM that includes an Image-Text Matching (ITM) function, which measures the compatibility between an image and a given text prompt. To adapt BLIP for multi-class classification, we compute
matching scores between each image and all possible class labels (10 for CIFAR-10, 100 for CIFAR-100). The resulting scores are passed through a softmax transformation to yield a probability distribution over the candidate labels, and a threshold is then applied to determine which labels are retained. We deploy BLIP-2 locally.

BLIP-2 [22] distinguishes itself through a two-stage pre-training framework that bootstraps supervision from noisy image-text pairs. In the first stage, it uses a captioning model to generate synthetic captions for images, filtering out low-quality pairs. In the second stage, it uses these refined pairs to train both image-text contrastive and matching objectives. This bootstrapping mechanism enables the model to learn more accurate alignment between modalities and improves its zero-shot classification performance.

Janus. Janus [49], developed by DeepSeek-AI, presents a unified multimodal framework that decouples visual encoding for better performance in both understanding and generation tasks, making it suitable for complex multimodal applications. We deploy Janus-Pro-7B locally.

What makes Janus-Pro unique is its dual-stream architecture, which explicitly separates visual understanding and generation capabilities [8]. Unlike models that fuse modalities early, Janus maintains independent pathways for encoding image and text representations, allowing it to better preserve modality-specific features. This decoupled approach, paired with a shared cross-modal attention mechanism, makes it highly versatile across a wide range of tasks, from classification to multimodal reasoning and instruction following.

Qwen. Qwen-VL [4] is part of Alibaba Cloud's Qwen series. The Qwen-VL series excels at tasks requiring fine-grained visual understanding and localization, supports multilingual interaction, and demonstrates strong performance on various vision-language benchmarks.
We accessed Qwen-VL-Plus through the API, as it is not well suited to local deployment. Qwen-VL-Plus is notable for its rich grounding capability, which tightly couples objects in the image with their semantic descriptions. It employs a fine-grained region-query alignment module that enhances its attention to localized visual details, making it particularly adept at tasks involving small objects or dense scenes. Additionally, its multilingual comprehension and instruction-following capability broaden its applicability in global, real-world settings.

LLaVA. LLaVA (Large Language and Vision Assistant) [24] integrates a vision encoder with the Vicuna language model, leveraging visual instruction tuning to align visual representations with natural language understanding. It has demonstrated strong performance on tasks such as ScienceQA, exhibiting capabilities comparable to those of multimodal GPT-4. We deploy LLaVA-13B locally.

LLaVA's core strength lies in its visual instruction tuning pipeline, in which it is fine-tuned on multi-turn image-text instruction data. This strategy allows LLaVA to follow natural language prompts while reasoning effectively over visual inputs. Its architecture leverages pretrained weights from Vicuna for language and CLIP for vision, merging them through projection and alignment layers that maintain semantic coherence across modalities. This design enables it to handle reasoning-heavy visual tasks with minimal fine-tuning.

Cleanlab. Cleanlab [32] is an open-source Python library that implements the Confident Learning (CL) framework, a model-agnostic, data-centric approach for detecting and correcting noisy labels in machine learning datasets. Unlike traditional methods that primarily adjust model loss functions to handle noisy labels, Cleanlab
addresses the root cause by estimating the joint distribution between noisy (observed) and true (latent) labels. This is achieved through three key principles: pruning, to identify and remove noisy labels; counting, to estimate noise rates using calibrated frequency statistics; and ranking, to prioritize training examples by their likelihood of being clean. CL operates under the class-conditional noise assumption and uses model-predicted probabilities as input. By leveraging a data structure called the confident joint, it robustly estimates the noise transition matrix and identifies mislabeled examples, even under class imbalance or imperfect probability calibration. Cleanlab supports a variety of learning paradigms, including multi-class and multi-label classification, and has demonstrated state-of-the-art performance in identifying real-world noisy labels in benchmark datasets such as CIFAR-10, ImageNet, and Amazon Reviews [33].

Docta. Docta [57] is an open-source framework for systematically auditing and improving the credibility of annotated language datasets, particularly in the context of safety alignment for large language models. It addresses the problem of mislabeled or inconsistent annotations that can undermine the reliability of downstream models, especially in tasks such as toxicity classification or safe response generation. The core methodology of Docta is built on estimating a label noise transition matrix and defining a data credibility metric that quantifies the alignment between noisy labels and their estimated true counterparts. Without requiring access to ground-truth labels, Docta leverages a k-NN label clusterability assumption and consensus-based soft labeling to identify noisy labels. A cosine similarity-based scoring function is then used to rank the likelihood of correctness for each instance, followed by threshold-based filtering to detect corrupted labels.
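To make the counting step of Confident Learning concrete, here is a deliberately simplified toy sketch, not cleanlab's actual implementation: per-class confidence thresholds are estimated from the model's self-confidence, the confident joint is built by counting, and off-diagonal entries flag candidate label errors. The function name and the tie-breaking rule are our own assumptions.

```python
import numpy as np

def confident_joint_issues(labels, pred_probs):
    """Toy sketch of Confident Learning's 'counting' step: estimate
    per-class thresholds, build the confident joint, and flag examples
    whose confident class disagrees with their given (noisy) label."""
    n, k = pred_probs.shape
    # Threshold t_j: average predicted probability of class j among
    # examples whose given label is j (their self-confidence).
    thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(k)])
    joint = np.zeros((k, k), dtype=int)
    issues = []
    for i in range(n):
        confident = np.where(pred_probs[i] >= thresholds)[0]
        if confident.size == 0:
            continue  # not confidently any class; not counted
        j = confident[np.argmax(pred_probs[i, confident])]
        joint[labels[i], j] += 1
        if j != labels[i]:  # noisy label disagrees with confident class
            issues.append(i)
    return joint, issues

labels = np.array([0, 0, 1, 1])
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],   # labeled 0 but confidently class 1
                  [0.1, 0.9],
                  [0.3, 0.7]])
joint, issues = confident_joint_issues(labels, probs)
print(issues)  # [1]
```

In practice one would use cleanlab itself (e.g., its `find_label_issues` filter), which additionally calibrates the joint, handles ties and class imbalance, and ranks examples by likelihood of being mislabeled.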
E MTurk Annotation Format Example and Definition of Agreement Rate

E.1 MTurk Annotation Format Example

Taking CIFAR-10 as an example, the logic and format of the MTurk human annotation results [33] are as follows:

MTurk example on CIFAR-10
  id: 20
  url: https://labelerrors.com/static/cifar10/20.png
  given original label: 7
  given original label name: horse
  our guessed label: 5
  our guessed label name: dog
  MTurk results: given: 3, guessed: 0, neither: 0, both: 2

In [33], the MTurk results are based on the Cleanlab framework. In their study, only those instances where Cleanlab disagreed with the original label were submitted to MTurk for human evaluation. As a result, the MTurk outcomes are restricted to four possible categories: (1) given, referring to the original label of the image; (2) guessed, referring to the label suggested by Cleanlab; (3) both, indicating that both the original and the Cleanlab-predicted labels are plausible; and (4) neither, indicating that neither is appropriate. While this review protocol facilitates targeted verification of label disagreements, it has a key limitation: it fails to adequately capture missing-label errors, especially when the number of plausible label candidates is large.

E.2 Definition of Agreement Rate

Based on the MTurk results, we define an Agreement Rate metric to evaluate the consistency between our method (including the individual VLMs and the aggregated REVEAL framework) and human annotations. To quantify the consistency between model predictions and human annotations,
we define the Agreement Rate based on four distinct MTurk outcome types. Let $R_i$ denote the set of predicted labels for image $i$ by a given method (e.g., a VLM or REVEAL), and let $g_i$ and $s_i$ represent the given (i.e., original) and guessed (i.e., Cleanlab-predicted) labels for that image. Agreement is determined under the following conditions:

• Case 1 (MTurk = given): Agreement is counted if $g_i \in R_i$.
• Case 2 (MTurk = guessed): Agreement is counted if $s_i \in R_i$.
• Case 3 (MTurk = both): Agreement is counted if both $g_i \in R_i$ and $s_i \in R_i$.
• Case 4 (MTurk = neither): Agreement is counted if $g_i \notin R_i$ and $s_i \notin R_i$.

We apply this agreement rule to five methods: the four VLMs, namely BLIP, LLaVA, Qwen, and Janus, as well as our aggregated framework REVEAL. The overall agreement rate is defined as the proportion of images whose prediction satisfies the corresponding MTurk-based agreement condition:

$$\mathrm{Agreement\_Rate} := \frac{1}{N} \sum_{i=1}^{N} \mathbb{I}\{\text{Image } i \text{ satisfies the agreement condition}\}$$

where $N$ is the total number of evaluated images, and $\mathbb{I}\{\cdot\}$ is the indicator function that returns 1 if the condition holds and 0 otherwise.
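The four-case rule translates directly into code. The sketch below is a straightforward implementation of the definition above; the function names and record layout are our own conventions, not the paper's released code.

```python
def agreement(mturk_outcome, predicted, given, guessed):
    """Agreement rule for one image. `predicted` is the label set R_i
    produced by a method; `given`/`guessed` are g_i and s_i."""
    r = set(predicted)
    if mturk_outcome == "given":
        return given in r
    if mturk_outcome == "guessed":
        return guessed in r
    if mturk_outcome == "both":
        return given in r and guessed in r
    if mturk_outcome == "neither":
        return given not in r and guessed not in r
    raise ValueError(f"unknown MTurk outcome: {mturk_outcome}")

def agreement_rate(records):
    """records: iterable of (mturk_outcome, predicted, given, guessed)."""
    records = list(records)
    return sum(agreement(*rec) for rec in records) / len(records)

rate = agreement_rate([
    ("given",   ["horse", "dog"], "horse", "dog"),  # g_i in R_i -> agree
    ("both",    ["horse"],        "horse", "dog"),  # s_i missing -> disagree
    ("neither", ["cat"],          "horse", "dog"),  # avoids both -> agree
])
print(rate)  # 2 of 3 cases agree
```

Note that Case 4 rewards a method for rejecting both labels, so a multi-label predictor is not trivially favored by always emitting large label sets.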
arXiv:2505.16160v3 [cs.CL] 28 May 2025

EduBench: A Comprehensive Benchmarking Dataset for Evaluating Large Language Models in Diverse Educational Scenarios

Bin Xu*, Yu Bai*, Huashan Sun*, Yiguan Lin*, Siming Liu, Xinyue Liang, Yaolin Li, Yang Gao†, Heyan Huang
School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
{binxu, yubai, gyang}@bit.edu.cn

Abstract

As large language models continue to advance, their application in educational contexts remains underexplored and under-optimized. In this paper, we address this gap by introducing the first diverse benchmark tailored for educational scenarios, incorporating synthetic data covering 9 major scenarios and over 4,000 distinct educational contexts. To enable comprehensive assessment, we propose a set of multi-dimensional evaluation metrics that cover 12 critical aspects relevant to both teachers and students. We further apply human annotation to ensure the effectiveness of the model-generated evaluation responses. Additionally, we successfully train a relatively small-scale model on our constructed dataset and demonstrate that it can achieve performance comparable to state-of-the-art large models (e.g., DeepSeek V3, Qwen Max) on the test set. Overall, this work provides a practical foundation for the development and evaluation of education-oriented language models. Code and data are released at https://github.com/ybai-nlp/EduBench.

1 Introduction

Large Language Models (LLMs) have recently shown remarkable potential in educational contexts, offering capabilities such as problem-solving, interactive dialogue, and decision-making (Jia et al., 2021; Rouzegar and Makrehchi, 2024). These abilities make LLMs promising tools for tasks ranging from personalized tutoring to educational content generation. However, despite growing interest, research on their practical deployment in education remains limited.
A key limitation of previous work is its narrow focus on knowledge-intensive tasks (Rein et al., 2023; Team et al., 2025; Cobbe et al., 2021), which fails to reflect the diverse educational scenarios encountered in real-world settings. These efforts often overlook the complexity introduced by varying roles (e.g., professors, students, psychological counselors), whose needs, goals, and interaction styles differ significantly. More importantly, existing benchmarks (Huang et al., 2024b; Koutcheme et al., 2024b; Yang et al., 2024b) rarely align task interactions with learners' cognitive levels and scenario-specific objectives, leading to evaluation dimensions that lack pedagogical relevance. In contrast, our work introduces EduBench, a benchmark designed not only to support educational applications but also to promote the development of robust, goal-aligned evaluation mechanisms that reflect the diversity of modern educational needs. By going beyond purely knowledge-based tasks (Koutcheme et al., 2024a; Ng and Fung, 2024; Huang et al., 2024a), EduBench re-centers educational AI research on the core values of education (holistic development, personalized support, and context-aware learning) while systematically constructing data that captures the rich landscape of roles, domains, and learning scenarios.

*Equal contribution. †Corresponding author.

To construct the data, we create a diverse dataset covering 9 different educational scenarios, including judging assignments, proposing a study plan given the profile of a specific student, offering suggestions for psychological health, and more. We design several educational contexts within each scenario to further increase the diversity of the data, such as question difficulty, student grade (e.g., elementary
school students, high school students, graduate students, etc.), and different subjects. All of the above categories combine into over 4,000 distinct educational contexts. For these different contexts, we create different querying data, resulting in a dataset containing 18,821 data points.

In designing our evaluation metrics, we focus on three principal aspects. First, Scenario Adaptation determines whether the model's response appropriately addresses the conditions and constraints defined by specific educational scenarios, task instructions, and role-playing requirements. Second, Factual & Reasoning Accuracy scrutinizes both the factual correctness of the information presented and the logical soundness of the reasoning applied. Third, Pedagogical Application examines the response's adherence to established educational principles and its potential to positively impact students' learning experiences. Each major aspect contains 4 distinct sub-metrics to evaluate responses in a finer-grained way. These metrics are systematically allocated across the nine educational scenarios to ensure comprehensive coverage of diverse task demands.

Figure 1: The left section presents our 9 educational scenarios, along with their multi-dimensional educational contexts (teacher- and student-oriented domains, Chinese/English language settings, easy/medium/hard difficulty, K-12 and higher-education subjects, and single-choice, multiple-choice, and short-answer question types, totaling 4,019 contexts) and corresponding metrics. The right section illustrates the results from human evaluation on EduBench.

When evaluating the responses of different models, we notice a trade-off between accuracy and cost when comparing human and LLM evaluators. Hence, we chose to investigate the evaluation capabilities of different LLMs by comparing their evaluative responses with those of human judges on a small test set consisting of 198 diverse data points.
Our experiments on several LLMs reveal that DeepSeek V3 achieves the best alignment with human annotators. Results of using it to benchmark five different LLMs show that: 1) the models' understanding of the scoring guidelines remains imperfect; 2) not all models used as evaluators align closely with human ratings: DeepSeek V3 stands out for its consistency, whereas GPT-4o performs relatively weaker; 3) smaller models generally perform worse than large models.

As smaller language models typically underperform significantly larger models, we further highlight the utility of our data in boosting the performance of smaller models. This is achieved through knowledge distillation from powerful larger models, aiming to bridge the performance gap. We implement a multi-source distillation strategy, extracting expertise for each scenario from the LLMs with the highest performance on that scenario. Our results demonstrate that a 7B model can attain performance comparable to the state-of-the-art 671B DeepSeek V3 model using a dataset of around 17,000 training samples.

We summarize our contributions as follows:

1. We present the first LLM-powered educational benchmark with the largest scalable scenario and context collection (4,000+ contexts), along with a 12-dimension evaluation system.
2. We establish a series of findings and will release all the model-generated and human-annotated data, which could benefit the LLM research community regarding both educational applications and LLM-based evaluations.
3. We show that our data can help smaller models to
achieve performance comparable to powerful state-of-the-art LLMs.

2 Related Work

2.1 Evaluation Benchmarks of LLMs

The evolution of evaluation frameworks has been pivotal in benchmarking general LLM capabilities, yet these frameworks remain limited in addressing domain-specific challenges, especially in education.

Figure 2: An overview of EduBench. The left part illustrates our data curation procedure. In the middle part, we showcase demonstrations of our three main evaluation principles and our investigation into the alignment of LLMs with the human judge. The right part shows how our data can boost the performance of smaller models.

Early benchmarks like GLUE (Wang et al., 2018) and SuperGLUE (Sarlin et al., 2020) established standardized linguistic tasks, while MMLU (Hendrycks et al., 2021) extended evaluations to cross-disciplinary knowledge. Building upon these, recent initiatives such as AlpacaEval (Dubois et al., 2024) quantify instruction-following fidelity via pairwise comparisons.

Despite these advancements, mainstream benchmarks still emphasize general capabilities over educational applicability. This gap has spurred specialized benchmarks that target educational scenarios, although significant limitations persist. For instance, EduNLP (Huang et al., 2024b) offers a modular framework spanning 8 disciplines and 5 downstream tasks, but its pedagogical relevance is limited by narrow scenario diversity and reliance on generic metrics. Code repair has emerged as a key educational application of LLMs, particularly in programming pedagogy. Koutcheme et al.
(2024a) benchmark LLMs' abilities in both code repair and explaining errors in natural language across varied contexts and error types. Building on this, Koutcheme et al. (2024b) introduce two realistic code repair scenarios, classroom tasks and open-source contributions, and build multi-institutional datasets reflecting distinct coding norms and error patterns. Persistent limitations also appear in academic Q&A assessment: GPQA (Rein et al., 2023) evaluates graduate-level reasoning across three disciplines, while SuperGPQA (Team et al., 2025) broadens this to 285 fields but maintains a rigid format that hampers interactive pedagogy. Language-learning adaptations (Huang et al., 2024a) further examine LLMs' potential to simplify educational texts, using textbook-based content to enhance readability. Despite progress in general and domain-specific benchmarks, their application to education remains limited by narrow scenarios, typically single-subject, fact-based Q&A, overlooking authentic contexts like adaptive feedback and collaborative learning.

2.2 LLMs for Education Applications

With strong problem-solving, interactive, and sound judgment capabilities, LLMs hold significant promise for education, prompting extensive research into their use across diverse scenarios and contexts.

For example, LLMs have been extensively used to generate educational content (Jia et al., 2021; Ghanem et al., 2022) in subjects spanning mathematics (Liyanage and Ranathunga, 2019; Rouzegar and Makrehchi, 2024), computer science (Logacheva et al., 2024; Sarsa et al., 2022; Frankford et al., 2024a), medicine (Berman et al., 2024), and more. These models enable efficient large-scale content creation while preserving scenario specificity and linguistic coherence, making
them valuable tools for instructional design.

In parallel, researchers have increasingly employed LLMs as interactive tutors. Such systems guide students through programming concepts (Vadaparty et al., 2024; Frankford et al., 2024b), assist with code debugging and explanation (MacNeil et al., 2023; Wang et al., 2024c; Yang et al., 2024a), support the acquisition of mathematical skills (Pardos and Bhandari, 2023), help simulate classroom learning processes (Zhang et al., 2025), and even train social communication strategies (Yang et al., 2024b). Moreover, in the context of teacher development, some efforts have used LLMs to simulate student behavior or provide instructional feedback (Markel et al., 2023), thereby improving teachers' pedagogical strategies through AI-driven interaction.

Another active line of work involves the use of LLMs as automated assessment tools. These models have been applied to evaluate essays (Cao et al., 2020; Yang et al., 2020; Kim and Kim, 2024) or mathematical answers (Jiang et al., 2024; Urrutia and Araya, 2023), offering scoring and diagnostic feedback. For example, previous studies have used LLMs to classify algebra errors (Heickal and Lan, 2024) or identify logical errors in student code (McNichols et al., 2023), providing timely and detailed responses that can support self-directed learning.

Despite growing interest in LLMs for education, current implementations often rely on surface-level metrics like ROUGE or accuracy, overlooking essential pedagogical aspects such as scaffolding, engagement, and long-term learning outcomes. Moreover, most studies address isolated tasks (e.g., quiz generation or one-off tutoring) without considering the full instructional cycle.

3 Dataset Collection

In this section, we describe the dataset construction methodology used in this paper. Our objective is to create a comprehensive benchmark that reflects realistic and diverse scenarios in the educational domain.
The dataset covers both students' and teachers' needs across multiple functional capacities, incorporating varied levels of difficulty and content modalities.

3.1 Scenario Design

We categorize and organize a set of key education-related tasks (Gan et al., 2023; Wang et al., 2024b), each reflecting a distinct dimension of education. Based on these tasks, we define a "scenario" as a representative educational setting that involves typical user roles, cognitive skill demands, input-output formats, and evaluation criteria. Building on the distinction in typical user roles, we categorize these tasks into Student-Oriented Scenarios (Ng and Fung, 2024; Team et al., 2025; Koutcheme et al., 2024a) and Teacher-Oriented Scenarios (Zhang et al., 2025; Ghanem et al., 2022; Huang et al., 2024a). This conceptualization enables us to structure a wide range of tasks grounded in real-world educational practice.

To support both learner assistance and instructional tasks, we divide the domains into two primary types.

Student-Oriented Scenarios: Problem Solving (Q&A), Error Correction (EC), Idea Provision (IP), Personalized Learning Support (PLS), Emotional Support (ES).

Teacher-Oriented Scenarios: Question Generation (QG), Automatic Grading (AG), Teaching Material Generation (TMG), Personalized Content Creation (PCC).

These scenarios simulate key educational roles and settings, enabling realistic evaluation of LLM capabilities in classroom, tutoring, and curriculum design contexts. A detailed breakdown of each scenario is provided in Appendix
A.

3.2 Educational Context Design

To ensure scenario realism, learner alignment, and meaningful evaluation, we construct diverse educational domain contexts that reflect the conditions under which tasks naturally occur. Each context is designed to capture variation across four primary dimensions: subject taxonomy, task difficulty, language setting, and question type. This diversity enables EduBench to cover a wide range of real-world educational scenarios, from K-12 to postgraduate levels, and from basic recall to complex reasoning.

Following the disciplinary category design of EduNLP (Huang et al., 2024b) and SuperGPQA (Team et al., 2025), we categorize tasks by educational stage (e.g., K-12 vs. higher education) and align them with different cognitive and pedagogical goals. Each task is assigned a difficulty level (easy, medium, hard) to reflect expected learner proficiency. EduBench currently supports both Chinese and English tasks to encourage multilingual model evaluation. Question types are specified based on domain functionality: most scenarios adopt common types such as Single Choice, Multiple Choice, and Short Answer, while specialized scenarios follow scenario-specific definitions. Details of the educational context design can be found in Appendix B.

3.3 Question Generation for Scenario Tasks

Building on the structured scenario design, we generate benchmark data using a systematic, scalable pipeline tailored to educational scenarios. First, we organize our coverage across nine major scenarios grounded in an educational competency framework (see Appendix A). Within each scenario, we apply multi-dimensional categorization, including subject taxonomy, task difficulty, and question type, to define fine-grained task settings. For each setting, we design prompt templates that reflect realistic user intents, then use GPT-4o to generate consistent data instances.
For example, in the mathematics domain, a setting like "Middle School – Short Answer Question" guides the model in producing relevant QA pairs. Further prompt design details and generated samples are available in Appendix E.

4 Evaluation Metric Design

Evaluating a complex domain such as education is challenging because it involves multiple participants and many dimensions. Previous studies have employed large language models (LLMs) for evaluation purposes (Koutcheme et al., 2024b; Team et al., 2025; Yang et al., 2024b; Wang et al., 2024a; Ng and Fung, 2024). Building on this foundation, we propose a newly designed, structured set of evaluation metrics comprising 12 dimensions that reflect both pedagogical goals and scenario-specific expectations, enhancing the accuracy and interpretability of model evaluation. Our design comprises three core dimensions: Scenario Adaptation, Factual & Reasoning Accuracy, and Pedagogical Application.

4.1 Overall Design

To enhance the accuracy and interpretability of model evaluation, we design a series of evaluation metrics. By incorporating them into the evaluation prompts, we guide the model to score according to the given metrics and to provide detailed justifications during the evaluation process. The approach of introducing metrics or principles into evaluation has been widely validated (Liu et al., 2025; Sharma et al., 2025; Bai et al., 2022), with some metrics being manually designed and others generated by the model before or during evaluation. The formal expression of
this process is as follows:

$$\mathrm{Score} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{Evaluator}_i(x, y, \mathcal{M}), \quad \text{where } \mathcal{M} \in \{\text{Handcrafted}, \text{Model-generated}\} \qquad (1)$$

4.2 Metric Design

We design distinct evaluation metrics for the 3 core dimensions, with 4 metrics for each dimension to cover its key aspects, resulting in 12 different metrics.

Scenario Adaptation  The Scenario Adaptation metric evaluates whether model responses are contextually appropriate and aligned with educational expectations. It is assessed across four dimensions: 1) Instruction Following & Task Completion, 2) Role & Tone Consistency, 3) Content Relevance & Scope Control, 4) Scenario Element Integration.

Factual & Reasoning Accuracy  The Factual & Reasoning Accuracy metric assesses the correctness of information and the rigor of reasoning in model responses. It includes four sub-metrics: 1) Basic Factual Accuracy, 2) Domain Knowledge Accuracy, 3) Reasoning Process Rigor, 4) Error Identification & Correction Precision.

Pedagogical Application  This metric evaluates whether responses embody sound educational principles and effectively support student learning. It consists of the following sub-metrics: 1) Clarity, Simplicity & Inspiration, 2) Motivation, Guidance & Positive Feedback, 3) Personalization, Adaptation & Learning Support, 4) Higher-Order Thinking & Skill Development.

Detailed explanations of these metrics are provided in Appendix C.

4.3 Dynamic Metric Allocation

Given the diversity of educational tasks covered in EduBench, a one-size-fits-all evaluation approach is insufficient. Not all metrics are equally relevant or applicable across the nine distinct scenarios. For example, Emotional Support tasks emphasize contextual empathy and scenario alignment more than factual precision, while scenarios like Problem Solving and Question Generation rely heavily on rigorous reasoning and factual correctness.
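Combining the averaged-score formulation with scenario-specific metric subsets can be sketched as below. The scenario-to-metric mapping shown here is purely hypothetical for illustration (the paper's actual mapping is in its Appendix C.4), as are the function and dictionary names.

```python
# Hypothetical scenario -> metric allocation, for illustration only.
ALLOCATION = {
    "Emotional Support": ["IFTC", "RTC", "SEI", "MGP"],
    "Problem Solving":   ["BFA", "DKA", "RPR", "CSI"],
}

def scenario_score(scenario, metric_scores, allocation=ALLOCATION):
    """Average per-metric scores over only the metrics allocated to this
    scenario, mirroring Score = (1/n) * sum_i Evaluator_i(x, y, M)."""
    metrics = allocation[scenario]
    return sum(metric_scores[m] for m in metrics) / len(metrics)

scores = {"BFA": 9.5, "DKA": 9.2, "RPR": 8.8, "CSI": 8.6}
print(scenario_score("Problem Solving", scores))  # ≈ 9.025
```

Restricting the average to the allocated subset is what keeps, say, an Emotional Support response from being penalized on reasoning-heavy metrics that were never meant to apply to it.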
To address this, we design a flexible evaluation framework that dynamically allocates appropriate metrics based on the specific requirements of each scenario. Each scenario is associated with a tailored subset of evaluation dimensions that best reflect its instructional goals, content characteristics, and target outcomes. This ensures both fairness and relevance in the evaluation process. Detailed metric-scenario mappings and allocation rules are provided in Appendix C.4.

4.4 Human-guided LLM-based Evaluation

Evaluating open-ended educational tasks is challenging due to the subjectivity and complexity of responses. While human annotation offers high-quality judgments, it is costly and difficult to scale. On the other hand, relying solely on LLM-based evaluation raises concerns about reliability and consistency across diverse scenarios.

We adopt a human-guided evaluation framework by constructing a high-quality test set of 198 diverse examples (11 per scenario, in both English and Chinese), annotated by an expert judge across the various task types. This dataset serves as a benchmark to assess the alignment of different LLMs with human evaluation standards, supporting scalable and efficient evaluation aligned with those standards.

5 Experiments

5.1 Experiment Settings

Response generation  We selected 5 representative models: DeepSeek R1 (Guo et al., 2025), DeepSeek V3 (Liu et al., 2024), Qwen Max, Qwen2.5-14B-Instruct, and Qwen2.5-7B-Instruct (Qwen, 2024). This selection provides a broad view of how models of varying sizes and types, including standard and reasoning models, handle educational tasks.

Response evaluation  We selected the following LLMs:
QwQ Plus (Qwen, 2025), GPT-4o (OpenAI et al., 2024), DeepSeek R1 (Guo et al., 2025), and DeepSeek V3 (Liu et al., 2024), as evaluators, owing to their strong scenario understanding, broad knowledge, and accurate intent recognition. These models assess responses using our defined metrics, guided by dedicated prompts that include language-specific descriptions to avoid multilingual bias. Prompt details are provided in Appendix D.

Test set  We use 198 test samples (99 Chinese, 99 English), comprising questions from Section 3.3 and responses from the five models. When selecting the test set, we sample data points from the different educational sub-contexts to ensure the diversity and comprehensiveness of our evaluation. We present the evaluation results from the best-performing evaluator model, DeepSeek V3; the complete model generation and evaluation results are detailed in Appendix J.

5.2 Evaluation Details

As mentioned in Section 4.4, to ensure the rationality and verifiability of the evaluation, we employ both model-based and human-based point-wise evaluation to assess the responses of the different models. Specifically, each QA pair is evaluated separately by each evaluation model and by one human annotator. All evaluations are based on the 12 metrics defined in Section 4.

During model evaluation, the metric information is embedded into the prompt, requiring models to output individual metric scores in a single response. In human evaluation, the annotator studies the metrics in advance and adheres strictly to the criteria during annotation. We adopt a point-wise evaluation strategy, as preliminary experiments reveal significant positional bias in pair-wise settings (Appendix I.3). The scoring guidelines are detailed in Appendix F.

5.3 Experiment Results

Model-evaluation results  As shown in Table 1, DeepSeek R1 demonstrates the best overall performance across the different metrics, while Qwen2.5-7B-Instruct performs the worst.
Moreover, DeepSeek R1 performs the best on Higher-Order Thinking & Skill Development, and Qwen2.5-7B-Instruct is the least satisfactory in Error Identification & Correction Precision, with both models showing a clear gap compared to the others. In specific scenarios, DeepSeek R1 remains the best, while Qwen2.5-7B-Instruct outperforms Qwen2.5-14B-Instruct in scenarios such as Emotional Support and Personalized Content Creation in Table 3. This shows that the gap between the smaller models is not very large, which drives us to choose the 7B model as the student model in our distillation experiments in Section 6.

Human-evaluation results DeepSeek R1 and Qwen2.5-7B-Instruct again demonstrate the best and worst performance, respectively, in Table 1, consistent with the results from model-based evaluation. Unlike model evaluation, the human annotator shows noticeably lower satisfaction with the performance of all five models on the Reasoning Process Rigor metric. Qwen2.5-7B-Instruct performs particularly poorly on this metric, scoring only 5.90. In contrast, DeepSeek R1 shows consistently strong performance on the Motivation, Guidance & Positive Feedback metric, even where other models fall short. At the scenario level, DeepSeek R1 remains far ahead in Table 3, while the performance gap between the 7B and 14B Qwen models is relatively small, making the 7B model a cost-effective choice
in resource-constrained settings.

Evaluator: DeepSeek V3
Model                 BFA   CSI   CRSC  DKA   EICP  HOTS  IFTC  MGP   PAS   RPR   RTC   SEI   Average
DeepSeek R1           9.51  8.75  9.44  9.45  7.61  8.53  9.47  7.76  9.64  8.85  9.14  9.06  8.93
DeepSeek V3           9.57  8.61  9.25  9.27  7.23  7.98  9.21  7.56  8.94  8.76  9.00  8.59  8.66
Qwen Max              9.38  8.53  9.12  9.23  7.43  7.99  9.16  7.85  9.05  8.57  9.00  8.61  8.66
Qwen2.5-14B-Instruct  9.28  8.50  9.03  9.14  7.14  7.81  8.94  7.55  8.71  8.35  8.82  8.25  8.46
Qwen2.5-7B-Instruct   9.27  8.55  9.08  9.12  6.77  7.86  8.96  7.05  8.95  8.42  8.82  8.53  8.44

Evaluator: Human
Model                 BFA   CSI   CRSC  DKA   EICP  HOTS  IFTC  MGP   PAS   RPR   RTC   SEI   Average
DeepSeek R1           8.97  8.60  8.98  8.94  8.86  8.56  8.77  8.20  9.26  7.95  8.91  8.92  8.74
DeepSeek V3           8.77  7.77  8.40  7.89  8.11  7.25  8.10  7.70  7.42  7.03  7.80  7.47  7.89
Qwen Max              8.81  8.01  8.52  8.27  8.23  7.59  8.10  7.70  7.89  7.31  8.09  7.74  8.02
Qwen2.5-14B-Instruct  8.74  7.76  8.26  7.79  7.86  6.88  7.77  6.97  7.02  7.01  7.59  7.03  7.56
Qwen2.5-7B-Instruct   8.49  7.63  8.04  7.82  7.45  6.93  7.65  7.05  7.38  5.90  7.82  7.35  7.46

Table 1: Metric-level average scores evaluated by DeepSeek V3 and human evaluators under the 12 metrics. For simplicity, we use abbreviations for the metrics. Full names of each metric can be found in Table 2.

Abbreviation  Full Name
IFTC          Instruction Following & Task Completion
RTC           Role & Tone Consistency
CRSC          Content Relevance & Scope Control
SEI           Scenario Element Integration
BFA           Basic Factual Accuracy
DKA           Domain Knowledge Accuracy
RPR           Reasoning Process Rigor
EICP          Error Identification & Correction Precision
CSI           Clarity, Simplicity & Inspiration
MGP           Motivation, Guidance & Positive Feedback
PAS           Personalization, Adaptation & Learning Support
HOTS          Higher-Order Thinking & Skill Development

Table 2: The abbreviations for all of our sub-metrics.

5.4 Analysis

5.4.1 Consistency Results

Consistency between evaluation models The models exhibit a high degree of consistency in Table 5, with Kendall's W values for nearly all model pairs above 0.5, most around 0.6, indicating strong consistency.
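For reference, the agreement statistic used here, Kendall's coefficient of concordance W, can be computed with a short sketch like the following. This is illustrative code, not from the EduBench release, and tie handling is omitted for brevity.

```python
def to_ranks(scores):
    """Convert raw scores to ranks (1 = highest); ties are not averaged here."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def kendalls_w(score_lists):
    """Kendall's W for m raters scoring the same n items: W = 12*S / (m^2 * (n^3 - n))."""
    rank_lists = [to_ranks(s) for s in score_lists]
    m, n = len(rank_lists), len(rank_lists[0])
    # Sum of ranks each item receives across all raters
    totals = [sum(r[i] for r in rank_lists) for i in range(n)]
    mean_total = sum(totals) / n
    s = sum((t - mean_total) ** 2 for t in totals)  # squared deviations of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

W ranges from 0 (no agreement among raters) to 1 (identical rankings), so values around 0.6, as reported in Table 5, indicate substantial agreement.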
DeepSeek V3 shows the highest consistency with the other models, and its ranking of average scores for the response models aligns closely with theirs.

Consistency between human and models The overall rankings of the generation models from human evaluation and model evaluation show similar trends. We also compute Kendall's Coefficient of Concordance between the different evaluators in Table 5. The evaluation scores from models do not align precisely with human judgments, which may be attributed to the models' limited understanding of the evaluation metrics. Overall, DeepSeek V3 exhibits the highest correlation with human evaluations, while GPT-4o shows the lowest. This pattern may be attributed to the relatively larger model size and broad training-data distribution of DeepSeek V3.

5.4.2 Model Behavior Analysis

Model evaluations tend to assign higher scores than the human annotator. Our results show that models assign scores approximately one point higher than humans' at both the metric and scenario levels. In the Q&A scenario, they assign scores above 9 versus human scores of 6–7, an almost two-point discrepancy that highlights a significant divergence in evaluation standards. We attribute this gap to two factors: 1) Models may misinterpret the scoring criteria, which could be improved through post-training,
as current evaluators are not reward models. 2) RLHF training makes models reluctant to give negative feedback. However, we believe post-training could mitigate this issue, and we will explore this direction in future work.

Larger models typically outperform smaller ones across scenarios. Top models like DeepSeek R1 excel overall, while smaller ones (e.g., Qwen2.5-7B-Instruct) succeed only in limited tasks and lag on complex metrics (Domain Knowledge Accuracy, etc.). Notably, for smaller models, size is not definitive: the 7B Qwen2.5 occasionally surpasses its 14B version.

6 Multi-source Distillation

Data selection based on EduBench To fully leverage the strengths of different response generation models across the various scenarios, we adopt a multi-source distillation pipeline. For each task, we select the best-performing model on the test set as the response generator, using it to answer educational-domain questions and construct the training dataset for the distillation model; details are shown in Appendix K. Through the distillation pipeline, we obtain a training set of 17,000 samples covering various subtasks across all 9 educational scenarios.

Results As shown in Table 4, after distillation, the performance of the 7B model significantly improves on 10 of the 12 metrics, achieving performance comparable to that of state-of-the-art models. Notably, it outperforms all other models, including DeepSeek R1 and Qwen Max, on the Reasoning Process Rigor metric.
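As an illustration of the selection step, the following sketch picks the top-scoring generator per scenario from scenario-level scores like those reported in Table 3. The name pick_generators is our own hypothetical choice, and only a subset of scenarios and models is shown; the actual pipeline may differ.

```python
# Per-scenario test-set scores (a subset of the DeepSeek V3-evaluated rows in Table 3).
scenario_scores = {
    "Q&A": {"DeepSeek R1": 9.49, "DeepSeek V3": 9.68, "Qwen Max": 9.18},
    "Emotional Support": {"DeepSeek R1": 9.38, "DeepSeek V3": 9.00, "Qwen Max": 9.04},
}

def pick_generators(scores):
    """For each scenario, choose the highest-scoring model as the response generator."""
    return {scenario: max(models, key=models.get)
            for scenario, models in scores.items()}
```

On these Table 3 numbers, the argmax selects DeepSeek V3 for Q&A but DeepSeek R1 for Emotional Support, which is exactly why a multi-source pipeline can beat distilling from any single teacher.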
Evaluator: DeepSeek V3
Model                 Q&A   PLS   EC    IP    AG    TMG   ES    QG    PCC   Average
DeepSeek R1           9.49  9.65  9.27  8.75  7.27  9.45  9.38  9.33  9.71  9.14
DeepSeek V3           9.68  9.04  9.14  8.53  7.05  9.34  9.00  9.06  8.92  8.86
Qwen Max              9.18  8.88  9.06  8.52  7.23  9.24  9.04  9.05  9.29  8.83
Qwen2.5-14B-Instruct  9.07  8.72  8.97  8.30  6.77  9.21  8.74  9.02  8.80  8.62
Qwen2.5-7B-Instruct   9.15  9.07  9.01  8.47  6.44  9.21  8.85  8.69  9.00  8.65

Evaluator: Human
Model                 Q&A   PLS   EC    IP    AG    TMG   ES    QG    PCC   Average
DeepSeek R1           7.17  9.11  8.71  8.80  8.42  8.86  9.15  8.79  9.35  8.71
DeepSeek V3           7.45  8.12  8.16  8.17  7.84  7.56  8.08  8.01  7.03  7.82
Qwen Max              7.72  7.94  8.21  8.15  7.89  7.99  7.85  8.39  8.42  8.06
Qwen2.5-14B-Instruct  7.66  7.38  7.92  7.56  7.55  7.84  7.31  7.91  7.36  7.61
Qwen2.5-7B-Instruct   6.78  7.63  7.93  7.74  6.79  7.86  7.79  7.55  7.42  7.50

Table 3: Scenario-level average scores evaluated by DeepSeek V3 and the human evaluator. Max values in each column per evaluator are bolded in the original. Full names of each scenario can be found in Section 3.1.

Model                    BFA   CSI   CRSC  DKA   EICP  HOTS  IFTC  MGP   PAS   RPR   RTC   SEI   Average
DeepSeek R1              9.51  8.75  9.44  9.45  7.61  8.53  9.47  7.76  9.64  8.85  9.14  9.06  8.93
DeepSeek V3              9.57  8.61  9.25  9.27  7.23  7.98  9.21  7.56  8.94  8.76  9.00  8.59  8.66
Qwen Max                 9.38  8.53  9.12  9.23  7.43  7.99  9.16  7.85  9.05  8.57  9.00  8.61  8.66
Qwen2.5-14B-Instruct     9.28  8.50  9.03  9.14  7.14  7.81  8.94  7.55  8.71  8.35  8.82  8.25  8.46
Qwen2.5-7B-Instruct      9.27  8.55  9.08  9.12  6.77  7.86  8.96  7.05  8.95  8.42  8.82  8.53  8.44
Distillation Qwen2.5-7B  9.26  8.56  9.27  8.95  6.89  8.43  9.41  7.32  9.56  9.26  9.09  8.95  8.75

Table 4: Performance comparison of our distillation model with other models based on metric-level evaluations by DeepSeek V3. Best results are bold while second best
results are underlined. For our distillation model, we use Qwen2.5-7B-Instruct as the base model. For simplicity, we use abbreviations for the metrics; full names of each metric can be found in Table 2.

Model        DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.55    0.61      0.65         0.63
GPT-4o       0.55         -       0.57      0.58         0.56
QwQ-Plus     0.61         0.57    -         0.62         0.63
DeepSeek V3  0.65         0.58    0.62      -            0.63
Human        0.63         0.56    0.63      0.63         -

Table 5: Kendall's W between the different evaluation models and human evaluation.

7 Discussion

In this section, we discuss the implications of our work for future research and application development at the intersection of education and artificial intelligence.

We believe that our work lays a foundational step for LLM-based educational research, offering a comprehensive benchmark and evaluation framework that captures the diverse roles, scenarios, and needs present in real-world education. By systematically incorporating scenarios like psychological counseling, assignment grading, and personalized study planning, we bring previously overlooked scenarios into the research landscape, encouraging deeper exploration across subject areas, learner profiles, and task types.

Our benchmark can serve as a springboard for future research in designing better benchmarks that fulfill more diverse needs, robust evaluation models, scenario-adapted educational LLMs, and even LLM agents that can perform multi-role, interactive support in classrooms or digital learning environments. Moreover, this work has immediate practical value for educators and institutions, offering structured tools that can help enhance efficiency, personalize learning, and reduce workload. The synthetic data construction methods we employ also open up new possibilities for scalable, low-cost training and evaluation, though future work could further improve context richness, realism, and dynamic data generation.
Ultimately, we hope this work inspires the community to build stronger foundation models for trustworthy and effective educational AI systems.

8 Conclusion

In this work, we present the first comprehensive benchmark for evaluating LLMs in diverse educational scenarios. Incorporating data across 9 major domains and over 4,000 distinct educational contexts, it integrates model-generated query data to reflect real-world needs. We further introduce a set of multi-dimensional evaluation metrics spanning 12 critical aspects, addressing the perspectives of both educators and learners. Human annotations are employed to validate the quality and relevance of model-generated outputs, enhancing the benchmark's reliability. Extensive experiments show that smaller models trained on our dataset can rival state-of-the-art LLMs, underscoring the potential for efficient education-oriented LLMs. We believe this benchmark can serve as a valuable resource for the community and inspire further research on optimizing LLMs for educational applications.

Limitations

This work has several limitations that point to promising directions for future research. First, due to cost constraints, we relied on only one human annotator for evaluation, which may limit the reliability and generalizability of our findings; expanding the annotator pool could improve robustness. Second, the set of LLMs we evaluated is relatively limited, and including a wider variety of models would offer a more comprehensive understanding of system performance. Third, all of our
query data was generated by models, which may not fully reflect realistic or diverse user intent; future work could benefit from incorporating more human-written queries. Additionally, while our work explores the correlation between human and model evaluations, there is still room to improve alignment. We also employed only basic prompt engineering techniques; more sophisticated prompting strategies or the use of LLM agents may lead to better results. Moreover, most of our evaluation metrics and task scenarios were manually designed, and automating this process could enhance scalability and consistency. Finally, our methods have not yet been tested in real-world educational environments with practitioners, which will be important for validating practical applicability and impact.

Author Contributions

• Bin Xu: Conceived and designed the analysis; collected the data; performed the analysis; wrote the paper.
• Yu Bai: Developed the idea; conceived and designed the analysis; managed the project; performed the analysis; wrote the paper.
• Huashan Sun: Conceived and designed the analysis; collected the data; performed the analysis; wrote the paper.
• Yiguan Lin: Conceived and designed the analysis; collected the data; performed the analysis; wrote the paper.
• Siming Liu: Conceived and designed the analysis; performed the analysis; wrote the paper; provided training to the annotation company.
• Xinyue Liang: Performed the analysis; wrote the paper.
• Yaolin Li: Wrote the paper.
• Yang Gao: Conceptualization; led and managed the project; provided computing resources and funding.
• Heyan Huang: Provided computing resources.

Ethical Consideration

In this study, all experiments involving the human annotator comply fully with ethical standards set by a professional annotation company. The annotator participates voluntarily and is fully informed about the experimental procedures and task requirements before starting.
Clear guidelines are provided, and the annotator receives sufficient training to ensure consistency and fairness in the annotation process. All data remain anonymized, and the privacy of the annotator is strictly protected. This study ensures that no tasks involve potential harm or ethical concerns, and all experiments follow relevant ethical guidelines.

References

Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, and others. 2022. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073.

Jonathan Berman, Lise McCoy, and Troy Camarata. 2024. LLM-Generated Multiple Choice Practice Quizzes for Pre-Clinical Medical Students; Use and Validity. Physiology, 39(S1):376.

Yue Cao, Hanqi Jin, Xiaojun Wan, and Zhiwei Yu. 2020. Domain-Adaptive Neural Automated Essay Scoring. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1011–1020, Virtual Event, China. ACM.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. Preprint, arXiv:2110.14168.

Yann Dubois, Percy Liang, and Tatsunori Hashimoto. 2024. Length-controlled AlpacaEval: A simple debiasing of automatic evaluators. In First Conference on Language Modeling.

Eduard Frankford, Ingo Höhn, Clemens Sauerwein, and Ruth Breu. 2024a. A Survey Study on
the State of the Art of Programming Exercise Generation Using Large Language Models. In 2024 36th International Conference on Software Engineering Education and Training (CSEE&T), pages 1–5, Würzburg, Germany. IEEE.

Eduard Frankford, Clemens Sauerwein, Patrick Bassner, Stephan Krusche, and Ruth Breu. 2024b. AI-Tutoring in Software Engineering Education.

Wensheng Gan, Zhenlian Qi, Jiayang Wu, and Jerry Chun-Wei Lin. 2023. Large Language Models in Education: Vision and Opportunities. arXiv preprint arXiv:2311.13160.

Bilal Ghanem, Lauren Lutz Coleman, Julia Rivard Dexter, Spencer Von Der Ohe, and Alona Fyshe. 2022. Question Generation for Reading Comprehension Assessment by Modeling How and What to Ask. In Findings of the Association for Computational Linguistics: ACL 2022, pages 2131–2146, Dublin, Ireland. Association for Computational Linguistics.

Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, and others. 2025. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948.

Hasnain Heickal and Andrew Lan. 2024. Generating Feedback-Ladders for Logical Errors in Programming using Large Language Models. arXiv preprint.

Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring Massive Multitask Language Understanding. In International Conference on Learning Representations.

Chieh-Yang Huang, Jing Wei, and Ting-Hao Kenneth Huang. 2024a. Generating Educational Materials with Different Levels of Readability using LLMs. In Proceedings of the Third Workshop on Intelligent and Interactive Writing Assistants, pages 16–22, Honolulu, HI, USA. ACM.

Zhenya Huang, Yuting Ning, Longhu Qin, Shiwei Tong, Shangzi Xue, Tong Xiao, Xin Lin, Jiayu Liu, Qi Liu, Enhong Chen, and Shijing Wang. 2024b.
EduNLP: Towards a Unified and Modularized Library for Educational Resources. arXiv preprint arXiv:2406.01276.

Xin Jia, Wenjie Zhou, Xu Sun, and Yunfang Wu. 2021. EQG-RACE: Examination-Type Question Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14):13143–13151.

Zhuoxuan Jiang, Haoyuan Peng, Shanshan Feng, Fan Li, and Dongsheng Li. 2024. LLMs can Find Mathematical Reasoning Mistakes by Pedagogical Chain-of-Thought. arXiv preprint.

Seungyoon Kim and Seungone Kim. 2024. Can Language Models Evaluate Human Written Text? Case Study on Korean Student Writing for Education. arXiv preprint.

Charles Koutcheme, Nicola Dainese, and Arto Hellas. 2024a. Using Program Repair as a Proxy for Language Models' Feedback Ability in Programming Education. In Proceedings of the 19th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2024), pages 165–181, Mexico City, Mexico. Association for Computational Linguistics.

Charles Koutcheme, Nicola Dainese, Sami Sarsa, Juho Leinonen, Arto Hellas, and Paul Denny. 2024b. Benchmarking Educational Program Repair. arXiv preprint arXiv:2405.05347.

Aixin Liu, Bei Feng, Bing Xue, Bingxuan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, and others. 2024. DeepSeek-V3 technical report. arXiv preprint arXiv:2412.19437.

Zijun Liu, Peiyi Wang, Runxin Xu, Shirong Ma, Chong Ruan, Peng Li, Yang Liu, and Yu Wu. 2025. Inference-time scaling for generalist reward modeling. arXiv preprint arXiv:2504.02495.

Vijini
Liyanage and Surangika Ranathunga. 2019. A Multi-language Platform for Generating Algebraic Mathematical Word Problems. In 2019 14th Conference on Industrial and Information Systems (ICIIS), pages 332–337, Kandy, Sri Lanka. IEEE.

Evanfiya Logacheva, Arto Hellas, James Prather, Sami Sarsa, and Juho Leinonen. 2024. Evaluating Contextually Personalized Programming Exercises Created with Generative AI. arXiv preprint.

Stephen MacNeil, Andrew Tran, Arto Hellas, Joanne Kim, Sami Sarsa, Paul Denny, Seth Bernstein, and Juho Leinonen. 2023. Experiences from Using Code Explanations Generated by Large Language Models in a Web Software Development E-Book. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1, pages 931–937, Toronto, ON, Canada. ACM.

Julia M. Markel, Steven G. Opferman, James A. Landay, and Chris Piech. 2023. GPTeach: Interactive TA Training with GPT-based Students. In Proceedings of the Tenth ACM Conference on Learning @ Scale, pages 226–236, Copenhagen, Denmark. ACM.

Hunter McNichols, Mengxue Zhang, and Andrew Lan. 2023. Algebra Error Classification with Large Language Models. arXiv preprint.

Chee Ng and Yuen Fung. 2024. Educational Personalized Learning Path Planning with Large Language Models. arXiv preprint arXiv:2407.11773.

OpenAI, Aaron Hurst, Adam Lerer, Adam P. Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, AJ Ostrow, Akila Welihinda, Alan Hayes, Alec Radford, Aleksander Mądry, Alex Baker-Whitcomb, Alex Beutel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, and 401 others. 2024. GPT-4o system card. arXiv preprint arXiv:2410.21276.

Zachary A. Pardos and Shreya Bhandari. 2023. Learning gain differences between ChatGPT and human tutor generated algebra hints. arXiv preprint.

Team Qwen. 2024. Qwen2.5 technical report. arXiv preprint arXiv:2412.15115.

Team Qwen. 2025. QwQ-32B: Embracing the power of reinforcement learning.
David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R. Bowman. 2023. GPQA: A Graduate-Level Google-Proof Q&A Benchmark. arXiv preprint arXiv:2311.12022.

Hamdireza Rouzegar and Masoud Makrehchi. 2024. Generative AI for Enhancing Active Learning in Education: A Comparative Study of GPT-3.5 and GPT-4 in Crafting Customized Test Questions.

Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. 2020. SuperGlue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4938–4947.

Sami Sarsa, Paul Denny, Arto Hellas, and Juho Leinonen. 2022. Automatic Generation of Programming Exercises and Code Explanations using Large Language Models.

Mrinank Sharma, Meg Tong, Jesse Mu, Jerry Wei, Jorrit Kruthoff, Scott Goodfriend, Euan Ong, Alwin Peng, Raj Agarwal, Cem Anil, and others. 2025. Constitutional classifiers: Defending against universal jailbreaks across thousands of hours of red teaming. arXiv preprint arXiv:2501.18837.

M.-A.-P. Team, Xinrun Du, Yifan Yao, Kaijing Ma, Bingli Wang, Tianyu Zheng, King Zhu, Minghao Liu, Yiming Liang, Xiaolong Jin, Zhenlin Wei, Chujie Zheng, Kaixin Deng, Shawn Gavin, Shian Jia, Sichao Jiang, Yiyan Liao, Rui Li, Qinrui Li, and 78 others. 2025. SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines. arXiv preprint arXiv:2502.14739.

Felipe Urrutia and Roberto
Araya. 2023. Automatically Detecting Incoherent Written Math Answers of Fourth-Graders. Systems, 11(7):353.

Annapurna Vadaparty, Daniel Zingaro, David H. Smith, Mounika Padala, Christine Alvarado, Jamie Gorson Benario, and Leo Porter. 2024. CS1-LLM: Integrating LLMs into CS1 Instruction. arXiv preprint.

Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355, Brussels, Belgium. Association for Computational Linguistics.

Ruiyi Wang, Stephanie Milani, Jamie C Chiu, Jiayin Zhi, Shaun M Eack, Travis Labrum, Samuel M Murphy, Nev Jones, Kate Hardy, Hong Shen, and others. 2024a. Patient-Ψ: Using large language models to simulate patients for training mental health professionals. arXiv preprint arXiv:2405.19660.

Shen Wang, Tianlong Xu, Hang Li, Chaoli Zhang, Joleen Liang, Jiliang Tang, Philip S. Yu, and Qingsong Wen. 2024b. Large Language Models for Education: A Survey and Outlook. arXiv preprint arXiv:2403.18105.

Tianyu Wang, Nianjun Zhou, and Zhixiong Chen. 2024c. Enhancing Computer Programming Education with LLMs: A Study on Effective Prompt Engineering for Python Code Generation. arXiv preprint.

Boyang Yang, Haoye Tian, Weiguo Pian, Haoran Yu, Haitao Wang, Jacques Klein, Tegawendé F. Bissyandé, and Shunfu Jin. 2024a. CREF: An LLM-based Conversational Software Repair Framework for Programming Tutors. arXiv preprint.

Diyi Yang, Caleb Ziems, William Held, Omar Shaikh, Michael S. Bernstein, and John Mitchell. 2024b. Social Skill Training with Large Language Models. arXiv preprint arXiv:2404.04204.

Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, and Xiaodong He. 2020.
Enhancing Automated Essay Scoring Performance via Fine-tuning Pre-trained Language Models with Combination of Regression and Ranking. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1560–1569, Online. Association for Computational Linguistics.

Zheyuan Zhang, Daniel Zhang-Li, Jifan Yu, Linlu Gong, Jinchang Zhou, Zhanxin Hao, Jianxiao Jiang, Jie Cao, Huiqin Liu, Zhiyuan Liu, Lei Hou, and Juanzi Li. 2025. Simulating Classroom Education with LLM-Empowered Agents. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10364–10379, Albuquerque, New Mexico. Association for Computational Linguistics.

A Detailed Scenario Descriptions

This appendix provides comprehensive definitions and explanations of all task scenarios included in EduBench. Each scenario captures a specific educational use case, featuring unique roles, goals, and evaluation considerations. The descriptions are organized into two categories, student-oriented and teacher-oriented, based on the primary user group served by the tasks.

A.1 Student-Oriented Scenarios

This section outlines scenarios in which AI systems directly assist students in their learning journey. The tasks are designed to reflect authentic student needs, including solving academic problems, receiving feedback on errors, obtaining personalized learning recommendations, and even receiving emotional support. These scenarios emphasize interaction quality, content accuracy, and adaptability to individual learning situations.

• Problem Solving: The ability of an AI system to accurately solve questions posed by students
across various subjects and difficulty levels.

• Error Correction: The capacity to identify and correct student errors in assignments, exams, or daily exercises. Errors can range from obvious mistakes to subtle issues such as variable misuse in code or logical flaws in mathematical reasoning. Evaluation focuses on the accuracy of error detection and the quality of correction.

• Idea Provision: This includes answering student queries about knowledge points, homework guidance, or exam preparation. It is subdivided into basic factual explanations, step-by-step solution analysis, and general academic advice. Responses are evaluated for accuracy, clarity, and informativeness.

• Personalized Learning Support: Based on student profiles (e.g., skill level, learning goals), the system recommends learning paths, exercises, or reading materials tailored to individual needs. Effectiveness is judged by the relevance, difficulty alignment, and usefulness of the recommendations.

• Emotional Support: This involves detecting a student's emotional state (e.g., anxiety before exams) from text and offering appropriate supportive feedback or suggestions. Scenarios include pre-exam stress, post-exam frustration, or social isolation. Evaluation metrics include emotion classification accuracy, specificity of emotional cues, and quality of suggestions.

A.2 Teacher-Oriented Scenarios

This section focuses on scenarios where AI systems are used to support educators in instructional design, assessment, and personalized teaching. These tasks capture the typical responsibilities of teachers, such as generating exam questions, grading, and preparing learning materials, and evaluate how effectively AI can augment or automate these functions to improve teaching efficiency and quality.

• Question Generation: Generating questions based on specified topics, difficulty levels, and knowledge scopes. This includes both single-topic and multi-topic (comprehensive) question generation.
Advanced requirements involve generating explanations and formatting full exams. Evaluation focuses on question quality, relevance, and structural coherence.

• Automatic Grading: Supporting grading of objective questions (e.g., multiple-choice, fill-in-the-blank) and subjective tasks (e.g., project reports) based on scoring rubrics. Feedback generation is also supported. Metrics include scoring accuracy, reasonableness, and feedback informativeness.

• Teaching Material Generation: Automatically generating educational content such as slides, teaching plans, and lecture notes. This includes content structuring and supplementing with relevant external materials like images or references.

• Personalized Content Creation: Generating differentiated content for students based on their learning levels or personal profiles. This includes both individualized assignments and tiered content design (e.g., differentiated learning objectives, teaching strategies, and assessments for varying student levels). Evaluation focuses on the internal validity of each item and cross-tier consistency.

B Educational Context Details

Subject Taxonomy: We follow a two-tier classification system to reflect academic breadth:

• K–12 Subjects:
  – Chinese, Mathematics, English, Physics, Chemistry, Biology, History, Geography.
• Higher Education Subjects:
  – Sciences: Mathematics, Physics, Chemistry, Biology, Astronomy
  – Engineering: Computer Science, Automation Control, Aerospace Science and Technology
  – Agriculture: Aquaculture, Crop Science
  – Economics: Applied Economics, Theoretical Economics
  – Education: General Education, Physical Education
  – Management: Business Administration, Public Administration
  – Medicine: Basic Medicine, Clinical Medicine
  – Social Sciences and Humanities: Sociology, Psychology, History, Law, Management
  – Literature and Arts: Linguistics, Journalism, Theory of Music
  – Military Science

Task Difficulty Design: Tasks are divided into three levels:

• Easy – Basic knowledge and low cognitive load.
• Medium – Intermediate tasks requiring moderate understanding.
• Hard – Complex problems
demanding critical reasoning and expertise.

Language:

• EduBench currently supports tasks in Chinese and English.

Question Types: The question type dimension captures the format of interaction or evaluation expected from the model:

• Standard Scenarios (e.g., Problem Solving, Idea Provision, Grading):
  – Single Choice
  – Multiple Choice
  – Short Answer

• Emotional Support Scenarios:
  – Mentally Healthy
  – Mild Anxiety
  – Moderate Anxiety
  – Severe Anxiety

• Personalized Learning Support & Personalized Content Creation:
  – No explicit question type; task outputs are tailored recommendations or generated content based on learner profiles.

C Evaluation Metric Design Details

C.1 Scenario Adaptation Criteria

The Scenario Adaptation metric evaluates whether the model's output aligns with scenario-specific expectations and pedagogical goals. Below are detailed descriptions of its four sub-components:

• Instruction Following & Task Completion: This sub-metric measures the model's ability to accurately interpret and complete assigned tasks, such as solving problems, correcting errors, or generating questions, while adhering to the required output format and constraints.

• Role & Tone Consistency: This dimension evaluates whether the language style, tone, and level of expertise in the response are appropriate for the designated role (e.g., teacher, teaching assistant, peer) and the target learner group (e.g., primary school students, university students).

• Content Relevance & Scope Control: The response is assessed for its focus on the specified topic or knowledge area, as well as its ability to stay within the intended difficulty level, subject boundaries, and content scope.

• Scenario Element Integration: This sub-metric measures the degree to which the model effectively incorporates scenario-specific information, such as prior student responses, individual learning preferences, or stated pedagogical objectives.
This is especially important in personalized learning and interactive tutoring contexts.

C.2 Factual & Reasoning Accuracy Criteria
This metric evaluates whether a model's response is grounded in factual correctness and logical rigor, particularly in scenario-intensive or multi-step tasks. It includes the following sub-components:
•Basic Factual Accuracy: This sub-metric examines the accuracy of objective information, including definitions, formulas, factual statements, code syntax, and terminology.
•Domain Knowledge Accuracy: It assesses the appropriateness and depth of subject-specific knowledge presented in the response, ensuring alignment with disciplinary standards across domains such as mathematics, law, and computer science.
•Reasoning Process Rigor: This criterion focuses on the completeness and logical validity of the model's reasoning in tasks that require multi-step derivations, explanations, or justifications.
•Error Identification & Correction Precision: In contexts involving diagnostics or feedback, this sub-metric evaluates the model's ability to accurately detect, localize, and correct errors without introducing false positives or negatives.

C.3 Pedagogical Application Criteria
This metric evaluates whether a model's response demonstrates pedagogical effectiveness and contributes meaningfully to learning outcomes. It includes the following sub-components:
•Clarity, Simplicity & Inspiration: This sub-metric assesses whether the explanation is articulated clearly and accessibly, using appropriate language to promote understanding and stimulate student interest or engagement.
•Motivation, Guidance & Positive Feedback: It evaluates the model's ability to encourage learners through constructive feedback and supportive guidance, promoting confidence and independent thinking rather than relying on direct answers alone.
•Personalization, Adaptation & Learning Support: This criterion measures the response’s ability to adapt based on the learner’s background, proficiency level, and individual needs,
including tailored suggestions, scaffolded prompts, and relevant resource recommendations.
•Higher-Order Thinking & Skill Development: This sub-metric examines whether the response promotes advanced cognitive skills, such as critical thinking, problem-solving, creative reasoning, and the ability to transfer knowledge to new contexts.

C.4 Metric allocation for each scenario
To ensure that evaluation is both fair and context-sensitive, we dynamically allocate evaluation metrics based on the instructional characteristics and goals of each scenario. Not all metrics are applicable across all scenarios: for example, reasoning rigor is essential in problem-solving, while emotional and adaptive support is critical in student guidance tasks. The following table summarizes the allocation of the designed metrics (Section 4.3) across the nine scenarios in EduBench:

Scenarios (table columns): Error Correction, Idea Provision, Grading, Answering Questions, Material Generation, Question Generation, Mental Health, Personalized Content Creation, Learning Support
 Instruction Following & Task Completion: ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓ ✓
 Role & Tone Consistency: ✓ ✓
 Content Relevance & Scope Control: ✓ ✓ ✓ ✓ ✓ ✓
 Scenario Element Integration: ✓ ✓ ✓ ✓ ✓
 Basic Factual Accuracy: ✓ ✓ ✓ ✓ ✓ ✓
 Domain Knowledge Accuracy: ✓ ✓ ✓
 Reasoning Process Rigor: ✓ ✓ ✓ ✓
 Error Identification & Correction Precision: ✓ ✓
 Clarity, Simplicity & Inspiration: ✓ ✓ ✓ ✓
 Motivation, Guidance & Positive Feedback: ✓ ✓ ✓
 Personalization, Adaptation & Learning Support: ✓ ✓ ✓
 Higher-Order Thinking & Skill Development: ✓ ✓ ✓ ✓
Table 6: Allocation of evaluation metrics across nine educational scenarios in EduBench.

This allocation ensures that each scenario is evaluated according to the dimensions most critical to its pedagogical purpose.
For instance, cognitive rigor is emphasized in analytical tasks (e.g., problem-solving and grading), while adaptive support and contextual integration are prioritized in student-facing personalization tasks (e.g., learning path design or mental health feedback). This scenario-aware evaluation design enhances the interpretability, accuracy, and instructional relevance of the benchmark results.

D Evaluation Prompt Design
To ensure consistent and fair human evaluation across multiple educational tasks and languages, we carefully design a suite of evaluation prompts aligned with our 12 fine-grained metrics (see Section 4). Each prompt is tailored to elicit targeted human judgment on a specific aspect of model behavior, such as factual accuracy, instruction following, reasoning rigor, or pedagogical impact.

Model Evaluation Prompt Design
I will provide you with an educational question and its corresponding answer. Please evaluate the given answer based on the provided assessment criteria and scoring principles, and output the score along with the reasons in JSON format.
Scoring principles: {principle}
Question: {question}
Answer: {response}
The JSON format is defined as follows:
{
  "detailed_scores": [
    { "principle": "principle 1", "score": 0, "reason": "" },
    ...
  ]
}

E Prompt Design Examples
This appendix provides illustrative examples of the prompt templates used during the question data generation process described in Section 3.3. Each example corresponds to a specific task scenario, domain, and target competency, and is designed to ensure the quality, diversity, and controllability of the generated data. The question design and response generation prompts are as follows: Problem Solving E.1, Error
Correction E.2, Idea Provision E.3, Personalized Learning Support E.4, Emotional Support E.5, Question Generation E.6, Automatic Grading E.7, Teaching Material Generation E.8, Personalized Content Creation E.9.

E.1 Problem Solving

Problem Solving Prompt Design
Please freely generate an appropriate question based on the following subject and difficulty level, and provide a standard answer. The question type is {question_type}. If the type is a short-answer question, for certain subjects, provide code and mathematical calculations if necessary. Do not return extra content.
Subject: {subject}
Difficulty Level: {level}
Return in JSON format:
"question":
"answer":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Problem-Solving Ability
-Basic Description: The ability to solve problems raised by students.
-Scenario Design: Answering questions across different subjects.

Prompt Design for Obtaining Model Responses in This Scenario
{question}. Please generate the corresponding answer based on the question.
"Answer":
Return in JSON format.

E.2 Error Correction

Error Correction Prompt Design
You are an expert teacher in all subjects, helping students correct errors. Please freely generate a question and an incorrect student answer based on the following subject and difficulty level, and provide corrections. The question type is {question_type}. If the type is a short-answer question, for certain subjects, provide code and mathematical calculations if necessary. Do not return extra content.
Subject: {subject}
Difficulty Level: {level}
Return in JSON format:
"question":
"original_answer":
"corrected_answer":
"correction_explanation":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Error Correction Ability
-Basic Description: The ability to identify and correct errors in students' homework, exams, or daily learning, providing targeted improvement suggestions.
-Scenario Design: Basic error correction: Input incorrect solutions, identify errors, and correct them (e.g., code, math).

Prompt Design for Obtaining Model Responses in This Scenario
{question}{original_answer} You are providing error correction services for students' answers. Please provide a "Corrected Answer" and "Error Explanation" based on this question and the original answer.
"Corrected Answer":
"Error Explanation":
Return in JSON format.

E.3 Idea Provision

Idea Provision Prompt Design
You are an expert teacher in all subjects, helping students with problem-solving guidance instead of providing standard answers. Please freely generate a question based on the following subject and difficulty level, and provide guidance without giving the answer. The question type is {question_type}. If the type is a short-answer question, for certain subjects, provide code and mathematical calculations if necessary. Do not return extra content.
Subject: {subject}
Difficulty Level: {level}
Return in JSON format:
"question":
"provided_guidance":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Q&A Ability
-Basic Description: The ability to answer students' learning questions in real-time, covering knowledge point explanations, homework assistance, exam preparation guidance, etc.
-Scenario Design:
 - Basic knowledge questions: Explanation of knowledge points (accuracy, simplicity, inspiration).
 - Question-solving analysis: Explanation of problem-solving steps, decomposition of complex problems, analysis of involved knowledge points, and summarization of experience.
 - General: Including study techniques, exam strategies, etc.

Prompt Design for Obtaining Model Responses in This Scenario
{question} Please provide an approach based on this question.
"Provided Approach":
Return in JSON format.

E.4 Personalized Learning Support

Personalized Learning Support Prompt Design
You are
a personalized service customization expert, providing tailored services to improve learning efficiency. Please freely generate a specific and appropriate student profile based on the following subject and difficulty level, and provide learning path planning suggestions and personalized recommendations. The question type is {question_type}. If the type is a short-answer question, for certain subjects, provide code and mathematical calculations if necessary. Do not return extra content.
Subject: {subject}
Difficulty Level: {level}
Return in JSON format:
"student_profile":
"learning_path_planning_suggestions":
"personalized_recommendations":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Personalized Service
-Basic Description: Based on student profiles, provide customized services to enhance learning efficiency.
-Scenario Design:
 - Learning path planning: Arrange future courses based on current ability levels and learning goals.
 - Personalized recommendations: Generate or recommend practice exercises and reading materials based on weak knowledge areas and learning habits.

Prompt Design for Obtaining Model Responses in This Scenario
{student_profile} Based on the student profile, provide "Learning Path Planning" and "Personalized Recommendations".
"Learning Path Planning":
"Personalized Recommendations":
Return in JSON format.

E.5 Emotional Support

Emotional Support Prompt Design
You are an intelligent assistant capable of identifying students' emotional states, analyzing the causes of emotional issues, and providing relevant comfort and advice. Please freely generate a multi-turn conversation with a student studying {subject}, identify their emotional state, analyze the causes of emotional issues, and provide relevant comfort and advice. The anxiety level is {anxiety_level}. Do not return extra content.
Academic Level: {level}
Return in JSON format:
"conversation_with_student":
"emotional_state_analysis":
"comfort_and_advice":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Psychological Support
-Basic Description: The ability to identify students' emotional states, analyze the causes of emotional issues, and provide relevant comfort and advice.
-Scenario Design:
 - Emotional recognition: Identify emotional states (e.g., normal, mild anxiety, severe anxiety) based on conversations with students.
 - Comfort and advice: Provide targeted comfort and advice for different emotional issues (e.g., pre-exam anxiety, post-exam frustration, social isolation).
 - Task scenario: Input: Student's text description or questions; Output: Emotional state recognition, targeted advice, and possible follow-up action suggestions.
 - Evaluation metrics: Recognition accuracy, emotional granularity, cause identification, response relevance, and practicality of advice.

Prompt Design for Obtaining Model Responses in This Scenario
{conversation_with_student}{anxiety_level} Please provide "Emotional State Analysis" and "Comfort & Suggestions" based on the student's emotional state and conversation.
"Emotional State Analysis":
"Comfort & Suggestions":
Return in JSON format.

E.6 Question Generation

Question Generation Prompt Design
You are an expert teacher in all subjects, generating appropriate questions based on knowledge scope and question types. Please freely generate a knowledge point and corresponding question based on the following subject and difficulty level, provide guidance, and give an answer. The question type is {question_type}. If the type is a short-answer question, for certain subjects, provide code and mathematical calculations if necessary. Do not return extra content.
Subject: {subject}
Difficulty Level: {level}
Return in JSON format:
"knowledge_point":
"question":
"provided_guidance":
"answer":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Question Generation
-Basic Description: Generate questions based on knowledge scope and question types, considering difficulty levels.
-Scenario Design:
 - Basic
question generation: Generate questions for different difficulty levels, question types, and knowledge scopes.
 - Comprehensive question generation: Cross-reference multiple knowledge points to generate questions.
 - Additional requirements:
  - Provide solutions and step-by-step scoring references for questions.
  - Compile quizzes or exams based on syllabi to form structured classroom assessments.

Prompt Design for Obtaining Model Responses in This Scenario
{knowledge_point}{subject}{question_type}{level} Please generate a question based on the subject, academic level, knowledge point, and question type.
"Question":
Return in JSON format.

E.7 Automatic Grading

Automatic Grading Prompt Design
You need to implement:
1. Objective grading: Grade multiple-choice, true/false, and fill-in-the-blank questions; provide step-by-step scoring for open-ended questions.
2. Subjective grading: Evaluate large assignments and lab reports comprehensively (e.g., workload, completeness, knowledge application).
3. Personalized feedback: Generate constructive feedback, including potential knowledge gaps and learning suggestions.
Please freely generate a question and a student's answer based on the following subject and difficulty level, and grade the answer. The question type is {question_type}. If the type is a short-answer question, for certain subjects, provide code and mathematical calculations if necessary. Do not return extra content.
Subject: {subject}
Difficulty Level: {level}
Return in JSON format:
"question":
"student_answer":
"grading":
"grading_details":
"personalized_feedback":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Automated Homework Grading
-Basic Description: Automatically grade students' homework and analyze results to provide suggestions.
-Scenario Design:
 - Objective grading: Grade multiple-choice, true/false, and fill-in-the-blank questions; provide step-by-step scoring for open-ended questions.
 - Subjective grading: Evaluate large assignments and lab reports comprehensively (e.g., workload, completeness, knowledge application).
 - Personalized feedback: Generate constructive feedback, including potential knowledge gaps and learning suggestions.

Prompt Design for Obtaining Model Responses in This Scenario
{question}{student_answer} Please provide "Score", "Scoring Details", and "Personalized Feedback" based on the question and student's answer.
"Score":
"Scoring Details":
"Personalized Feedback":
Return in JSON format.

E.8 Teaching Material Generation

Teaching Material Generation Prompt Design
You are responsible for helping teachers generate high-quality teaching materials, including lesson plans, presentations, and lecture notes. Based on textbook chapters or knowledge points, automatically generate structured lesson plans, including learning objectives, key points, difficult points, and classroom activity designs. The question type is {question_type}. If the type is a short-answer question, for certain subjects, provide code and mathematical calculations if necessary. Do not return extra content.
Subject: {subject}
Difficulty Level: {level}
Return in JSON format:
"knowledge_point":
"teaching_materials":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Teaching Material Generation
-Basic Description: Generate high-quality teaching materials, including lesson plans, presentations, and lecture notes.
-Scenario Design:
 - Course PPT or lecture note generation: Automatically generate slides or detailed notes based on textbook chapters or knowledge points.
 - Lesson plan generation: Automatically generate structured lesson plans, including learning objectives, key points, difficult points, and classroom activity designs.
 - Course-related material generation or retrieval: Generate or search for relevant materials, such as images, teaching cases, links, and references.
Prompt Design for Obtaining Model Responses in This Scenario
{knowledge_point} Please provide "Teaching Material" based on this knowledge point. The teaching material should include teaching objectives, key points and difficulties, classroom activity
design, etc.
"Teaching Material":
Return in JSON format.

E.9 Personalized Content Creation

Personalized Content Creation Prompt Design
You are an intelligent assistant capable of generating personalized learning content or tasks based on individual student differences. Please freely generate a student profile for a student studying {subject}, and consider the following three aspects:
1. One-on-one: Customize practice questions or reading materials based on specific student profiles.
2. Tiered teaching: For the same course content, generate different teaching objectives, methods, assessment methods, and homework assignments for students at different levels.
3. Other: Combine other capability requirements to design differentiated ability evaluation data for individual students, study groups, and classes.
Do not return extra content.
Academic Level: {level}
Strictly return in JSON format:
"student_profile":
"personalized_learning_content_or_tasks":

Description: This prompt corresponds to the following capability requirement:
-Capability Requirement: Personalized Content Generation
-Basic Description: Generate personalized learning content or tasks based on individual student differences.
-Scenario Design:
 - One-on-one customization: Based on specific student profiles, generate tailored practice questions or reading materials.
 - Tiered teaching: For the same course content, generate different teaching objectives, methods, assessment methods, and homework assignments for students at different levels.
 - Other considerations: Combine other capability requirements to design differentiated ability evaluation data for individual students, study groups, and classes.

Prompt Design for Obtaining Model Responses in This Scenario
{student_profile} Based on the student profile, generate a 'personalized learning content or task' for each student.
"personalized learning content or task":
Return in JSON format.
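The templates above are plain strings with named placeholders such as {subject}, {level}, and {question_type}. A minimal sketch of how such a template can be filled and the evaluator's JSON reply parsed; the helper names are illustrative, not part of EduBench:

```python
import json

# Abridged version of the Appendix D evaluation prompt; the placeholder
# names ({principle}, {question}, {response}) follow the paper's templates.
EVAL_TEMPLATE = (
    "Please evaluate the given answer based on the provided assessment "
    "criteria and scoring principles, and output the score along with the "
    "reasons in JSON format.\n"
    "Scoring principles: {principle}\n"
    "Question: {question}\n"
    "Answer: {response}"
)

def build_eval_prompt(principle: str, question: str, response: str) -> str:
    """Fill the template's named placeholders with str.format."""
    return EVAL_TEMPLATE.format(
        principle=principle, question=question, response=response
    )

def parse_eval_output(raw: str) -> dict:
    """Turn the evaluator's {"detailed_scores": [...]} reply into {principle: score}."""
    data = json.loads(raw)
    return {item["principle"]: item["score"] for item in data["detailed_scores"]}
```

For example, a reply of the form `{"detailed_scores": [{"principle": "Basic Factual Accuracy", "score": 8, "reason": "..."}]}` parses to `{"Basic Factual Accuracy": 8}`.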
F Comprehensive Evaluation Metric Scoring Details
To systematically evaluate the quality of AI-generated responses in educational settings, we designed a comprehensive human evaluation rubric comprising three main dimensions, each containing several fine-grained criteria. Each criterion is rated on a 10-point scale with clearly defined level anchors to guide consistent judgment. Details are listed as follows:

F.1 Instructional Quality

F.1.1 Instruction Following & Task Completion (IFTC)
Description: Did it fully understand and execute the user's instruction? Was the core task (e.g., solving problems, error correction, question generation) completed? Is the output formatting correct?
•9-10: Fully understood and precisely executed all instructions; achieved core task with perfect accuracy; output format is fully compliant.
•7-8: Accurately understood main instructions and correctly completed the task; core goals are well achieved; format is mostly correct with only minor omissions or deviations.
•5-6: Understood the general intent but may miss some details; task largely completed but with some inaccuracies or omissions; formatting attempts present but with notable flaws.
•3-4: Misunderstood part of the instruction; low task completion or major errors; formatting mostly incorrect.
•1-2: Completely misunderstood or ignored instructions; task not completed or totally incorrect; formatting is chaotic or irrelevant.

F.1.2 Role & Tone Consistency (RTC)
Description: Does the language style, tone, and level of professionalism match the assigned role (e.g., teacher, teaching assistant, peer) and the target learner group (e.g., elementary, college)?
•9-10: Excellent role-playing (e.g., teacher/TA); language style, professionalism, and tone (e.g., encouraging/serious) are perfectly aligned with the assumed role and audience.
•7-8: Role and tone are mostly consistent and appropriate for the scenario, with minor deviation in individual expressions.
•5-6: Attempts to match the role and tone can be seen, but overall consistency is weak; some expressions are disconnected from the role/scenario.
•3-4: Significant mismatch in role and tone; comes across as unnatural or inconsistent.
•1-2: No reflection of assigned role/tone; expression entirely inconsistent with the scenario.

F.1.3 Content Relevance & Scope Control (CRSC)
Description: Is the content tightly aligned with the specified topic, theme, or question? Is it kept within the specified difficulty level, scenario, or scope?
•9-10: Content is highly relevant to the specified topic/theme/question; strictly within required difficulty/scope/discipline without redundant or irrelevant information.
•7-8: Overall relevance is high; scope control is good with possibly a small amount of slightly off-topic or mildly overreaching information.
•5-6: Mostly relevant, but includes some off-topic or out-of-scope content; scope control needs improvement.
•3-4: Poor relevance; includes a significant amount of irrelevant information or is largely outside scope.
•1-2: Content is largely irrelevant or completely outside the specified scope.

F.1.4 Scenario Element Integration (SEI)
Description: Did it effectively use scenario-specific information (e.g., previous student answers, learning preferences, specific teaching goals)? Especially important in personalized, Q&A, or error-correction contexts.
•9-10: Fully integrated all key scenario elements (e.g., student history, learning preferences); output is highly personalized and well-matched to the teaching context.
•7-8: Used major scenario elements effectively; response is targeted, possibly overlooks minor details but does not affect overall results.
•5-6: Some use of scenario information, but integration is shallow; personalization or contextual fit is average.
•3-4: Only surface-level reference to scenario information; did not integrate core elements effectively; weak contextual connection.
•1-2: Completely ignored scenario-specific information; output is generic, templated, and irrelevant to the scenario.

F.2 Content Accuracy

F.2.1 Basic Factual Accuracy (BFA)
Description: Are objective facts such as concept definitions, formulas, dates, terminology, code syntax, legal clauses correctly presented?
•9-10: All stated factual elements (definitions, formulas, dates, terms, syntax, etc.) are completely accurate.
•7-8: Vast majority of facts are correct; possibly contains very minor, non-critical typos or omissions.
•5-6: Most facts are correct, but there are some notable factual errors that require review.
•3-4: Contains several or key factual inaccuracies; information is not trustworthy.
•1-2: Riddled with factual errors; information is completely incorrect or misleading.

F.2.2 Domain Knowledge Accuracy (DKA)
Description: Is the use of subject matter knowledge (math, programming, law, finance, etc.) not only correct but also appropriately specialized and aligned with domain standards?
•9-10: Subject matter application is not only accurate but also shows appropriate depth and rigor; adheres to industry or academic standards.
•7-8: Proper use of professional knowledge reflecting a good degree of proficiency; minor shortcomings in depth or detail not affecting validity.
•5-6: Basic accuracy in subject knowledge, but somewhat surface-level or lacking rigor; some confusion or omissions of non-core concepts.
•3-4: Significant errors or major omission in subject-specific content; lacks professionalism.
•1-2: Serious domain errors; completely incorrect or misleading; does not meet any professional standards.

F.2.3 Reasoning Process Rigor (RPR)
Description: For content requiring reasoning (e.g., math steps, code logic, legal arguments, case analysis), is the logical flow complete and sound?
•9-10: Reasoning is
complete, clear, and rigorous; all steps are correct; arguments are strong and free of logical fallacies.
•7-8: Reasoning is largely correct and logically coherent with minor issues in individual steps or details that do not affect the conclusion.
•5-6: Reasoning is visible but contains unclear logic, missing steps, or insufficient argumentation, affecting the overall outcome.
•3-4: Reasoning has major logical flaws, confusion in steps, or critical omissions; reliability is low.
•1-2: Virtually no valid reasoning; logic is chaotic; steps are incorrect or irrelevant.

F.2.4 Error Identification & Correction Precision (EICP)
Description: In error correction scenarios, are errors precisely identified (no missed or false positives)? Are the corrections correct and optimal?
•9-10: Precisely identified all errors (no omission or false positives); provided completely correct, clear, and optimal correction suggestions.
•7-8: Correctly located most major errors; suggestions are generally accurate and effective with only minor omissions or less-than-perfect advice.
•5-6: Identified some errors but with clear omissions or false positives; suggestions are partially correct but may lack clarity, completeness or optimality.
•3-4: Inaccurate error detection with critical omissions or many false positives; suggestions contain errors or are hard to comprehend.
•1-2: Completely failed to detect errors; provided entirely incorrect or misleading correction advice.

F.3 Pedagogical Effectiveness

F.3.1 Clarity, Simplicity & Inspiration (CSI)
Description: Are explanations, descriptions, and feedback clear, concise, and easy for the target learners to understand? Is the delivery inspiring and thought-provoking?
•9-10: Extremely clear and concise explanations; fully accessible for target learners; vibrant and engaging delivery that inspires deep thought and interest.
•7-8: Clear and easy to understand; appropriate for learner level; somewhat thought-provoking and can trigger reflection.
•5-6: Generally understandable but may be wordy, complex, or dull; limited inspirational impact.
•3-4: Lacks clarity; uses excessive jargon or complex structures; difficult to comprehend; uninspiring.
•1-2: Confusing and hard to follow; disregards learner needs; offers no inspiration and may cause confusion.

F.3.2 Motivation, Guidance & Positive Feedback (MGP)
Description: Does the interaction provide encouragement and support? Is constructive and positive language used? In answering or tutoring, does it guide thinking or just give away answers?
•9-10: Strongly supportive and encouraging; consistently uses constructive and positive language; offers highly effective heuristic guidance instead of simply giving answers.
•7-8: Generally supportive tone and positive language; provides useful guidance though occasionally too direct.
•5-6: A mix of encouragement and neutral/critical language; guidance is inconsistent, sometimes helpful, sometimes overly direct or lacking.
•3-4: Lacks encouragement and support; language is neutral or mildly negative; rarely guides, often just answers or remains unhelpful.
•1-2: Negative or discouraging tone; no motivation or support; fails to guide or gives misleading suggestions.

F.3.3 Personalization, Adaptation & Learning Support (PAS)
Description: Can it provide differentiated content, advice, or feedback based on a student's level, traits, or needs? Does it recommend effective learning paths or resources?
•9-10: Highly personalized content/advice/feedback based on student level/traits/needs; resource and learning path suggestions are accurate, practical, and valuable.
•7-8: Demonstrates some adaptation to student situation; provides relevant learning advice or resources with good utility.
•5-6: Attempts personalization but with limited effectiveness; recommendations are generic and of limited
value.
•3-4: Little to no personalization; output is the same for everyone; learning support is insufficient or unrelated.
•1-2: No personalization; output may conflict with student needs; offers no or incorrect learning support.

F.3.4 Higher-Order Thinking & Skill Development (HOTS)
Description: Does the interaction or content help foster students' critical thinking, creativity, problem-solving, or knowledge transfer skills?
•9-10: Skillfully designed to promote critical/creative thinking, problem-solving, or transfer of knowledge (e.g., through open-ended questions, comparative analysis, case study, project-based tasks).
•7-8: Includes guiding questions or moderately challenging tasks that positively support the development of higher-order thinking (e.g., analysis, evaluation, application).
•5-6: Some attempt to encourage higher-order thinking (e.g., simple reflective questions), but limited in depth and scope; mainly focused on rote understanding or basic application.
•3-4: Interaction/content mostly revolves around memory and comprehension; rarely addresses higher-order thinking tasks.
•1-2: Completely ignores higher-order skill development; encourages rote memorization and repetition; may inhibit thinking flexibility.

G Human Annotator Cost
The cost for each QA pair is $2.22. We provided 198 questions (99 in English, 99 in Chinese) with 5 responses per question, totaling 990 QA pairs. The final cost is approximately $2,194.
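Each of the twelve criteria above yields a score on the 10-point scale. A small sketch of rolling per-criterion annotator scores up into the three rubric dimensions; the dimension groupings follow Sections F.1-F.3, but the simple mean-of-means aggregation is an illustrative assumption, not the paper's stated procedure:

```python
from statistics import mean

# Criterion abbreviations and their dimensions, as defined in F.1-F.3.
DIMENSIONS = {
    "Instructional Quality": ["IFTC", "RTC", "CRSC", "SEI"],
    "Content Accuracy": ["BFA", "DKA", "RPR", "EICP"],
    "Pedagogical Effectiveness": ["CSI", "MGP", "PAS", "HOTS"],
}

def dimension_scores(criterion_scores):
    """criterion_scores: {criterion: [annotator scores on the 10-point scale]}.
    Returns the mean per dimension, computed only over the criteria actually
    present (per Table 6, not every scenario is scored on all twelve)."""
    result = {}
    for dim, criteria in DIMENSIONS.items():
        per_criterion = [mean(criterion_scores[c]) for c in criteria
                         if c in criterion_scores]
        if per_criterion:  # skip dimensions with no assigned criteria
            result[dim] = mean(per_criterion)
    return result
```

For a scenario scored only on IFTC, RTC, and BFA, the helper averages within each criterion first, then across the criteria of each dimension, and omits dimensions with no assigned criteria.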
H Detailed Correlation Analysis Between Model Evaluation and Human Assessment across Dimensions

Instruction Following & Task Completion
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1       -        0.56     0.66       0.69      0.63
GPT-4o           0.56       -       0.55       0.52      0.53
QwQ-Plus         0.66      0.55      -         0.63      0.58
DeepSeek V3      0.69      0.52     0.63        -        0.65
Human            0.63      0.53     0.58       0.65       -

Role & Tone Consistency
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1       -        0.48     0.55       0.65      0.71
GPT-4o           0.48       -       0.55       0.56      0.53
QwQ-Plus         0.55      0.55      -         0.58      0.59
DeepSeek V3      0.65      0.56     0.58        -        0.70
Human            0.71      0.53     0.59       0.70       -

Content Relevance & Scope Control
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1       -        0.57     0.61       0.65      0.61
GPT-4o           0.57       -       0.60       0.60      0.55
QwQ-Plus         0.61      0.60      -         0.62      0.64
DeepSeek V3      0.65      0.60     0.62        -        0.61
Human            0.61      0.55     0.64       0.61       -

Scenario Element Integration
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1       -        0.55     0.59       0.70      0.66
GPT-4o           0.55       -       0.53       0.56      0.54
QwQ-Plus         0.59      0.53      -         0.61      0.63
DeepSeek V3      0.70      0.56     0.61        -        0.70
Human            0.66      0.54     0.63       0.70       -

Table 7: Kendall's W between different evaluation models and human evaluation in Instructional Quality.

I Pre-Experiments

I.1 Existence of self-preference
Given the potential influence of both the response generation model and the evaluation model, it is crucial to verify whether a model tends to favor its own responses. To examine this, we use each of the three response-generation models as evaluators to assess the responses they themselves generated. Specifically, for each question, we construct pairwise comparisons among the responses generated by the three models. The evaluator model is then asked to select the better response from each pair. By counting the number of times each model's output is preferred and comparing the distribution of win rates across different evaluators, we assess whether models exhibit self-preference biases. As shown in Table 10, the overall trends are consistent across the three different evaluators, with no substantial differences observed in
https://arxiv.org/abs/2505.16160v3
the exact win rates.

Basic Factual Accuracy
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.51    0.66      0.68         0.59
GPT-4o       0.51         -       0.57      0.56         0.59
QwQ-Plus     0.66         0.57    -         0.62         0.63
DeepSeek V3  0.68         0.56    0.62      -            0.54
Human        0.59         0.59    0.63      0.54         -

Domain Knowledge Accuracy
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.6     0.59      0.58         0.57
GPT-4o       0.6          -       0.59      0.62         0.56
QwQ-Plus     0.59         0.59    -         0.64         0.64
DeepSeek V3  0.58         0.62    0.64      -            0.54
Human        0.57         0.56    0.64      0.54         -

Reasoning Process Rigor
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.55    0.59      0.69         0.62
GPT-4o       0.55         -       0.57      0.6          0.55
QwQ-Plus     0.59         0.57    -         0.57         0.64
DeepSeek V3  0.69         0.6     0.57      -            0.65
Human        0.62         0.55    0.64      0.65         -

Error Identification & Correction Precision
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.53    0.56      0.54         0.52
GPT-4o       0.53         -       0.55      0.58         0.59
QwQ-Plus     0.56         0.55    -         0.68         0.67
DeepSeek V3  0.54         0.58    0.68      -            0.66
Human        0.52         0.59    0.67      0.66         -

Table 8: Kendall's W between different evaluation models and human evaluation in Content Accuracy.

Clarity, Simplicity & Inspiration
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.56    0.56      0.54         0.57
GPT-4o       0.56         -       0.57      0.52         0.51
QwQ-Plus     0.56         0.57    -         0.56         0.56
DeepSeek V3  0.54         0.52    0.56      -            0.56
Human        0.57         0.51    0.56      0.56         -

Motivation, Guidance & Positive Feedback
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.54    0.58      0.59         0.67
GPT-4o       0.54         -       0.58      0.57         0.58
QwQ-Plus     0.58         0.58    -         0.53         0.61
DeepSeek V3  0.59         0.57    0.53      -            0.54
Human        0.67         0.58    0.61      0.54         -

Personalization, Adaptation & Learning Support
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.61    0.71      0.74         0.74
GPT-4o       0.61         -       0.56      0.62         0.59
QwQ-Plus     0.71         0.56    -         0.71         0.71
DeepSeek V3  0.74         0.62    0.71      -            0.72
Human        0.74         0.59    0.71      0.72         -

Higher-Order Thinking & Skill Development
             DeepSeek R1  GPT-4o  QwQ-Plus  DeepSeek V3  Human
DeepSeek R1  -            0.58    0.69      0.69         0.68
GPT-4o       0.58         -       0.62      0.6          0.6
QwQ-Plus     0.69         0.62    -         0.68         0.66
DeepSeek V3  0.69         0.6     0.68      -            0.68
Human        0.68         0.6     0.66      0.68         -

Table 9: Kendall's W between different evaluation models and human evaluation in Pedagogical Effectiveness.

Notably, none of the evaluators exhibit a strong
preference for the responses they themselves generated. These results suggest that model self-preference is not a significant concern, thereby granting us greater flexibility in the choice of evaluation models.

I.2 Discrimination between normal model and reasoning model
After ruling out self-preference biases, we further investigate whether models of different types, such as reasoning models and normal models, exhibit biases toward each other. To this end, we design an experiment in which two external models, QwQ (a reasoning model) and GPT-4o (a normal model), are used to evaluate the responses generated by DeepSeek V3 and DeepSeek R1. The evaluation follows the same pairwise comparison protocol as in the self-preference setting: for each question, the evaluator selects the better response from a pair. By comparing the evaluation outcomes across the two evaluators, we aim to assess potential inter-model biases. As shown in Table 11, the reasoning model exhibits a clear preference for responses generated by other reasoning models,
whereas the general-purpose model demonstrates a more balanced evaluation. Notably, the reasoning model's bias is substantial, with a win ratio as skewed as 9 to 1 in favor of reasoning models. These results highlight the importance of incorporating both reasoning and normal evaluators in the assessment process to mitigate evaluation bias. Relying on only one type of model may lead to unfair or distorted conclusions.

I.3 Discrimination between different positions
When using pairwise evaluation, a natural concern arises as to whether evaluators might exhibit positional bias, i.e., favoring responses based on their order of presentation. To control for this factor, we randomly sample 200 instances from the evaluation dataset and present them to the evaluator in both the original and reversed order. For each comparison, we record the position of the selected response (e.g., incrementing the Former count if the first response is chosen, and Latter if the second is chosen). This allows us to assess whether response position systematically influences evaluation outcomes.

Evaluator    Win times
Qwen Max     Qwen Max (68) < DeepSeek V3 (101) < DeepSeek R1 (193)
DeepSeek V3  Qwen Max (82) < DeepSeek V3 (86) < DeepSeek R1 (196)
DeepSeek R1  Qwen Max (68) < DeepSeek V3 (110) < DeepSeek R1 (186)

Table 10: To investigate whether models exhibit a preference for their own generations, we employ each data-generating model as an evaluator to assess the datasets it produced. Specifically, we construct pairwise comparisons by selecting answers generated by two different models at a time. The evaluator is then asked to choose the better response from each pair. By aggregating the number of times each model's outputs are preferred, we examine potential self-preference biases.

Evaluator  Gen_Models  Win_times
Normal     Normal      442
Normal     Reasoning   399
Reasoning  Reasoning   914
Reasoning  Normal      100

Table 11: Comparison of evaluation results between different model types.
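The order-swap counting protocol described above can be sketched in a few lines. This is an illustrative reconstruction rather than the authors' code; `judge` is a hypothetical stand-in for the evaluator model, returning 0 if it prefers the first response shown and 1 otherwise.

```python
from collections import Counter

def order_swap_tally(pairs, judge):
    """Show each (a, b) pair in both orders and count which display
    position the judge picks; a position-blind judge should produce
    mirrored counts between the Normal and Reverse runs."""
    counts = {"Normal": Counter(), "Reverse": Counter()}
    for a, b in pairs:
        counts["Normal"]["Former" if judge(a, b) == 0 else "Latter"] += 1
        counts["Reverse"]["Former" if judge(b, a) == 0 else "Latter"] += 1
    return counts

# Toy judge that always prefers the longer response; because it is
# content-based, its picks flip position when the order is reversed.
pairs = [("short", "a much longer response"), ("tiny", "somewhat longer")]
tally = order_swap_tally(pairs, lambda x, y: 0 if len(x) >= len(y) else 1)
```

A positional bias like the one observed for GPT-4o would instead show Former dominating in both the Normal and Reverse runs.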
Evaluator  Order    Former_win  Latter_win
GPT-4o     Normal   873         117
GPT-4o     Reverse  742         248
QwQ        Normal   605         385
QwQ        Reverse  387         603

Table 12: The table presents the evaluation results after reversing the order of the responses, allowing us to examine whether the evaluator exhibits any positional bias.

The experimental results reveal that GPT-4o exhibits a notable positional bias, with the former response being selected significantly more often than the latter, even after the order is reversed. As shown in Table 12, when the evaluated responses are identical aside from their order, the former response is chosen at a disproportionately high rate (by a factor of roughly 3 to 7) compared to the latter. In contrast, QwQ demonstrates a more balanced evaluation, with selection counts remaining consistent before and after the reversal, indicating minimal positional bias.

J Extra Results
We present the additional experimental results in this section, including the evaluation results of the five response models by the three additional evaluators: GPT-4o, DeepSeek R1, and QwQ-Plus. The evaluation is conducted across two dimensions: metric-level and task-level assessments. The task-level score is the average score of the metrics under each task. The detailed results can be found in Table 14 and Table 15.

K Distillation Training Setting
After obtaining human and model evaluation results on sample data, we can optimize the selection of model-generated data based on evaluation scores to maximize data quality and efficiency. We propose two
data selection strategies.

Scenario                       Category Dimensions          All Data (Chinese / English / Total)  For Training (Chinese / English / Total)
Problem Solving                Duration*Difficulty*Subject  1,306 / 1,328 / 2,634                 1,272 / 1,284 / 2,556
Error Correction               Duration*Difficulty*Subject  620 / 1,350 / 1,970                   603 / 1,334 / 1,937
Idea Provision                 Duration*Difficulty*Subject  1,342 / 1,350 / 2,692                 1,300 / 1,322 / 2,622
Personalized Learning Support  Duration*Subject             348 / 561 / 909                       67 / 435 / 502
Emotional Support              Duration*Anxiety Level       1,344 / 1,074 / 2,418                 1,331 / 1,059 / 2,390
Question Generation            Duration*Difficulty*Subject  1,358 / 1,338 / 2,696                 1,331 / 1,322 / 2,653
Automatic Grading              Duration*Difficulty*Subject  931 / 1,073 / 2,004                   912 / 1,058 / 1,970
Teaching Material Generation   Duration*Difficulty*Subject  1,324 / 1,347 / 2,671                 1,306 / 1,255 / 2,561
Personalized Content Creation  Duration*Subject             568 / 259 / 827                       557 / 235 / 792
Total                                                       9,141 / 9,680 / 18,821                8,679 / 9,304 / 17,983

Table 13: The number of scenarios in the dataset and the specific quantities for each scenario.

The first strategy involves selecting the best generation model within each scenario. The specific process includes calculating the average scores of all evaluation metrics
Evaluator: DeepSeek R1
Model                 Q&A   PLS   EC    IP    AG    TMG   ES    QG    PCC   Average
DeepSeek R1           9.81  9.83  9.05  9.11  7.74  9.46  9.71  9.22  9.73  9.29
DeepSeek V3           9.67  9.12  8.97  8.82  8.32  9.31  9.34  8.65  9.23  9.05
Qwen Max              9.07  9.11  8.86  8.84  7.99  9.15  9.40  8.89  9.29  8.96
Qwen2.5-14B-Instruct  8.94  8.79  8.68  8.23  7.83  9.06  8.52  8.35  8.80  8.58
Qwen2.5-7B-Instruct   8.34  9.01  8.64  8.16  6.64  9.33  8.75  8.23  9.06  8.46

Evaluator: DeepSeek V3
DeepSeek R1           9.49  9.65  9.27  8.75  7.27  9.45  9.38  9.33  9.71  9.14
DeepSeek V3           9.68  9.04  9.14  8.53  7.05  9.34  9.00  9.06  8.92  8.86
Qwen Max              9.18  8.88  9.06  8.52  7.23  9.24  9.04  9.05  9.29  8.83
Qwen2.5-14B-Instruct  9.07  8.72  8.97  8.30  6.77  9.21  8.74  9.02  8.80  8.62
Qwen2.5-7B-Instruct   9.15  9.07  9.01  8.47  6.44  9.21  8.85  8.69  9.00  8.65

Evaluator: GPT-4o
DeepSeek R1           9.32  9.38  9.05  8.78  8.51  9.25  9.15  8.98  9.08  9.06
DeepSeek V3           9.22  9.15  9.14  8.77  8.54  9.12  9.05  9.00  8.95  8.99
Qwen Max              9.50  9.17  9.01  8.69  8.70  8.99  8.96  8.92  9.05  8.99
Qwen2.5-14B-Instruct  9.34  9.25  8.92  8.51  8.11  8.99  9.11  8.77  8.82  8.87
Qwen2.5-7B-Instruct   9.22  9.17  8.92  8.84  8.04  9.05  9.00  8.62  8.94  8.87

Evaluator: QwQ-Plus
DeepSeek R1           9.85  9.87  9.24  9.05  8.78  9.75  9.85  9.09  9.88  9.49
DeepSeek V3           9.59  9.43  9.06  8.66  8.18  9.29  9.66  8.47  9.24  9.06
Qwen Max              9.90  9.25  9.03  8.78  8.11  9.54  9.56  8.79  9.70  9.18
Qwen2.5-14B-Instruct  9.83  9.21  9.05  8.23  7.88  9.22  9.45  8.48  9.02  8.94
Qwen2.5-7B-Instruct   9.02  9.28  8.79  8.82  7.16  9.33  9.31  7.98  9.35  8.78

Evaluator: Human
DeepSeek R1           7.17  9.11  8.71  8.80  8.42  8.86  9.15  8.79  9.35  8.71
DeepSeek V3           7.45  8.12  8.16  8.17  7.84  7.56  8.08  8.01  7.03  7.82
Qwen Max              7.72  7.94  8.21  8.15  7.89  7.99  7.85  8.39  8.42  8.06
Qwen2.5-14B-Instruct  7.66  7.38  7.92  7.56  7.55  7.84  7.31  7.91  7.36  7.61
Qwen2.5-7B-Instruct   6.78  7.63  7.93  7.74  6.79  7.86  7.79  7.55  7.42  7.50

Table 14: Scenario-Level Average Scores Evaluated by Different Evaluators. Max values in each column per evaluator are bolded. Full names of each scenario can be found in Section 3.1.
from both human and model evaluators on the sample data, ranking the generation models in each scenario according to these average values, and then selecting samples generated by the best-performing
model in each scenario from all distilled data. The second strategy focuses on selecting optimal models for each evaluation metric. We calculate the average score of each generation model on individual metrics and rank them accordingly to identify the best model for each metric. During the final distilled-data screening phase, if a piece of data was generated by a model that has been recognized as optimal in any evaluation metric, it will be included in the final finetuning dataset.

We use the following settings for model training: the learning rate is set to 1.0×10^-5, with a batch size of 1 per GPU device. Gradient accumulation is applied over 8 steps, resulting in an effective batch size of 8. For parameter updating, we employ full fine-tuning, where all model parameters are updated during training. All experiments are conducted on 4 NVIDIA A100 GPUs, each with 40GB of memory.

Evaluator: DeepSeek R1
Model                 BFA   CSI   CRSC  DKA   EICP  HOTS  IFTC  MGP   PAS   RPR   RTC   SEI   Average
DeepSeek R1           9.55  8.67  9.64  9.53  8.66  8.39  9.61  7.30  9.80  9.17  9.64  9.45  9.12
DeepSeek V3           9.58  8.47  9.48  9.30  9.32  7.53  9.39  7.48  8.92  9.05  9.32  9.10  8.91
Qwen Max              9.42  8.49  9.46  9.24  9.09  7.67  9.25  7.44  8.97  8.62  9.34  9.05  8.84
Qwen2.5-14B-Instruct  9.08  8.28  9.20  8.82  8.98  7.16  8.87  6.86  8.20  8.57  9.02  8.51  8.46
Qwen2.5-7B-Instruct   8.73  8.22  9.00  9.00  8.30  7.27  8.72  6.61  8.68  8.05  9.23  8.55  8.36

Evaluator: DeepSeek V3
DeepSeek R1           9.51  8.75  9.44  9.45  7.61  8.53  9.47  7.76  9.64  8.85  9.14  9.06  8.93
DeepSeek V3           9.57  8.61  9.25  9.27  7.23  7.98  9.21  7.56  8.94  8.76  9.00  8.59  8.66
Qwen Max              9.38  8.53  9.12  9.23  7.43  7.99  9.16  7.85  9.05  8.57  9.00  8.61  8.66
Qwen2.5-14B-Instruct  9.28  8.50  9.03  9.14  7.14  7.81  8.94  7.55  8.71  8.35  8.82  8.25  8.46
Qwen2.5-7B-Instruct   9.27  8.55  9.08  9.12  6.77  7.86  8.96  7.05  8.95  8.42  8.82  8.53  8.44

Evaluator: GPT-4o
DeepSeek R1           9.48  8.73  9.59  9.17  9.05  8.35  9.13  8.45  9.18  8.89  9.11  8.65  8.98
DeepSeek V3           9.54  8.72  9.51  9.05  9.14  8.05  9.16  8.59  8.95  8.75  9.02  8.63  8.93
Qwen Max              9.58  8.65  9.43  8.83  9.07  8.08  9.14  8.56  8.97  8.89  8.95  8.64  8.90
Qwen2.5-14B-Instruct  9.45  8.51  9.44  8.88  8.93  7.83  9.02  8.20  8.88  8.60  9.07  8.43  8.77
Qwen2.5-7B-Instruct   9.45  8.57  9.38  8.85  8.59  8.00  9.01  8.20  8.85  8.65  9.02  8.65  8.77

Evaluator: QwQ-Plus
DeepSeek R1           9.78  8.47  9.78  9.82  9.70  8.19  9.65  8.35  9.86  9.61  9.70  9.58  9.37
DeepSeek V3           9.42  8.25  9.57  9.09  9.52  7.22  9.36  7.62  9.23  9.23  9.39  9.32  8.93
Qwen Max              9.64  8.39  9.59  9.47  9.30  7.48  9.45  7.68  9.39  9.10  9.48  9.36  9.03
Qwen2.5-14B-Instruct  9.49  8.20  9.48  8.98  9.20  7.10  9.15  7.64  8.77  8.83  9.41  9.06  8.78
Qwen2.5-7B-Instruct   9.08  8.10  9.31  8.98  8.91  7.02  9.03  7.18  9.09  8.61  9.30  9.33  8.66

Evaluator: Human
DeepSeek R1           8.97  8.60  8.98  8.94  8.86  8.56  8.77  8.20  9.26  7.95  8.91  8.92  8.74
DeepSeek V3           8.77  7.77  8.40  7.89  8.11  7.25  8.10  7.70  7.42  7.03  7.80  7.47  7.89
Qwen Max              8.81  8.01  8.52  8.27  8.23  7.59  8.10  7.70  7.89  7.31  8.09  7.74  8.02
Qwen2.5-14B-Instruct  8.74  7.76  8.26  7.79  7.86  6.88  7.77  6.97  7.02
KNN-SSD: Enabling Dynamic Self-Speculative Decoding via Nearest Neighbor Layer Set Optimization

Mingbo Song1, Heming Xia2, Jun Zhang3, Chak Tou Leong2, Qiancheng Xu2, Wenjie Li2, Sujian Li1
1National Key Laboratory for Multimedia Information Processing, Peking University
2Department of Computing, The Hong Kong Polytechnic University
3College of Computer Science and Technology, Zhejiang University
songmingbo@stu.pku.edu.cn; he-ming.xia@connect.polyu.hk

Abstract
Speculative Decoding (SD) has emerged as a widely used paradigm to accelerate the inference of large language models (LLMs) without compromising generation quality. It works by efficiently drafting multiple tokens using a compact model and then verifying them in parallel using the target LLM. Notably, Self-Speculative Decoding proposes skipping certain layers to construct the draft model, which eliminates the need for additional parameters or training. Despite its strengths, we observe in this work that drafting with layer skipping exhibits significant sensitivity to domain shifts, leading to a substantial drop in acceleration performance. To enhance the domain generalizability of this paradigm, we introduce KNN-SSD, an algorithm that leverages K-Nearest Neighbor (KNN) search to match different skipped layers with various domain inputs. We evaluated our algorithm on various models and multiple tasks, observing that its application leads to a 1.3×–1.6× speedup in LLM inference.1

1 Introduction
Large language models (LLMs) have proven highly capable in handling various downstream tasks (Touvron et al., 2023; OpenAI et al., 2024; Yang et al., 2025). However, the token-by-token generation in autoregressive decoding results in quadratic computational complexity, which presents significant efficiency challenges, particularly as model size increases.
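The sequential bottleneck becomes clear when the decoding loop is written out. Below is a toy sketch (not from the paper) in which a stand-in `next_token` callable plays the role of one full LLM forward pass, making the one-pass-per-token cost explicit.

```python
def autoregressive_decode(prompt, next_token, max_new_tokens):
    """Vanilla decoding: one full forward pass per generated token,
    so wall-time latency grows with every new token produced."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        tokens.append(next_token(tokens))  # each call = one LLM forward pass
    return tokens

# Toy "model": the next token is simply the current sequence length.
out = autoregressive_decode([1, 2], lambda ts: len(ts), 3)
```

Speculative decoding, introduced next, attacks exactly this loop by drafting several tokens cheaply and verifying them in a single forward pass of the target model.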
To address this challenge, speculative decoding (SD) has been proposed as a promising solution for lossless acceleration of LLM inference (Xia et al., 2023; Leviathan et al., 2023; Chen et al., 2023). At each decoding step, SD uses a lightweight draft model to efficiently predict multiple tokens, which are then verified in parallel by the target LLM to preserve the original output distribution.

1Code in https://github.com/mbsong/KNN-SSD

[Figure 1: Average speedup results under task-by-task sample streams (Summarization, Reasoning, Translation, StoryTelling, Text2SQL). The dashed line represents the average speedup ratio achieved by KNN-SSD. Results indicate that our KNN-SSD can achieve a stable speedup while Self-SD methods' speedups decline, as they are sensitive to domain shifts.]

The effectiveness of SD hinges on the trade-off between drafting latency and speculation accuracy (Xia et al., 2024; Hu et al., 2025). During inference, SD aims to both minimize latency and maximize accuracy to improve efficiency while maintaining output quality. Recent advancements in SD have significantly expanded the boundaries of the latency-accuracy trade-off by employing diverse techniques, such as integrating lightweight draft models into LLMs (Ankner et al., 2024; Zhang et al., 2025) or aligning a small model with a larger one (Kim et al., 2023; Bachmann et al., 2025) for speculative generation. However, these approaches inevitably require additional models, which increase the total number of parameters and introduce additional training complexity. Addressing this concern, Self-SD (Zhang et al., 2024) has been proposed to selectively skip certain layers within the large model itself to
https://arxiv.org/abs/2505.16162v1
construct a compact draft model.

arXiv:2505.16162v1 [cs.CL] 22 May 2025

In this work, we find that the selection of skipped layers is not universal. Instead, one skip-layer configuration could be sensitive to domain shifts. For example, when applying a configuration derived from the summarization task to other tasks, as shown in Figure 1, we observe a significant reduction in speedup from 1.35× to less than 1.10×, highlighting the need for domain-specific adaptation. To tackle this issue, we propose KNN-SSD, a method for dynamically adjusting skip-layer configurations based on domain representations. The key goal of KNN-SSD is to optimize skipped layers specific to each domain, simulate realistic input scenarios, and accurately identify the domain of each sample. To achieve this goal, KNN-SSD integrates three main features: (1) a skipped layer set optimization process for the specific domain of samples, (2) an input sample stream designed to better simulate real-life user inputs, and (3) a KNN model that uses the LLM's last hidden representations to identify the domain of input samples. Experiments are conducted using the LLaMA-2 series (Touvron et al., 2023) and Qwen-2.5 series (Yang et al., 2025) across various tasks, including summarization, reasoning, translation, storytelling, and text-to-SQL. KNN-SSD achieves a 1.3×–1.6× speedup compared to autoregressive decoding. This approach maintains an over 80% token acceptance rate across the LLaMA-2 series and an over 99% token acceptance rate across the Qwen-2.5 series, indicating high alignment potential between the draft model and the target LLM. Further analysis validated the effectiveness of KNN-SSD on out-of-domain sample inputs and on a dataset that contains various types of samples.
To summarize, our key contributions are:
1. We introduce KNN-SSD, a self-speculative decoding algorithm with fine-grained skipped layer set selection, which adopts k-nearest neighbor search to retrieve a suitable skipped layer set for each input sample;
2. To evaluate our method, we design a dynamic input data stream that contains samples from diverse domains, and KNN-SSD can achieve a 1.3×–1.6× speedup across different models without changing the generated tokens' distribution.

2 Related Work
Speculative Decoding (SD). Speculative Decoding (SD) aims to accelerate autoregressive text generation in LLMs without compromising output quality (Xia et al., 2023; Leviathan et al., 2023). It reduces decoding latency by predicting multiple future tokens using a draft model or internal mechanisms, followed by verification and correction by the target LLM. Existing strategies include aligning small draft models with large models (Xia et al., 2023; Kim et al., 2023; Bachmann et al., 2025) or predicting k tokens in parallel (Cai et al., 2024; Wen et al., 2024). In another line of work, plug-and-play methods have been examined, with examples including appending pseudo tokens (Fu et al., 2024) and skipping layers dynamically (Metel et al., 2024; Xia et al., 2025) during inference. Despite efficiency improvements, these methods often rely on auxiliary models or sub-optimal choices, hindering scalability and effectiveness. The most related methods to our work include Self-SD (Zhang et al., 2024) and LayerSkip (Elhoushi et al., 2024), which also construct draft models by skipping intermediate LLM
layers. However, both approaches are trained on a single data type and struggle with diverse data streams. Our work aims to tackle this problem by integrating samples from various domains.

Sparsity and Model Compression. Sparsity and model compression are essential for enhancing the efficiency of LLMs by reducing active parameters or computations during inference (Hu et al., 2022). Common approaches include parameter pruning (Frantar and Alistarh, 2023; Ashkboos et al., 2024; Sun et al., 2024), knowledge distillation (Huang et al., 2022; Gu et al., 2024; Wu et al., 2024), and quantization (Yao et al., 2022; Liu et al., 2023; Park et al., 2024), which compress models while preserving performance. Structured sparsity methods, such as layer skipping (Liu et al., 2024; Bhendawade et al., 2024; Xia et al., 2025) and dynamic sparsification, further enhance efficiency by adapting computation to input characteristics. While these works aim to optimize computational workloads, they may sacrifice performance by using sub-optimal choices because of insufficient search in the layer space. In contrast, our KNN-SSD method can always find optimal choices to accelerate LLM inference losslessly.

3 Background
3.1 Self-Speculative Decoding
Unlike traditional SD methods that require an auxiliary draft model, Self-Speculative Decoding (Self-SD) leverages the LLM's internal structure to draft tokens by selectively skipping certain layers (Zhang et al., 2024).

[Figure 2: Different tasks have different optimal skip layer sets. "Sum SL" denotes the skip layer set optimized for the Summarization task.]

Given data x_1, ..., x_n and the target LLM M with L layers, including both attention and MLP layers, Self-SD aims to find an optimal z ∈ {0,1}^L, where z^(i) = 1 indicates that the i-th layer needs to be skipped and vice versa. A black-box function f(·) is used to assess the average inference time per verified token:

    z* = argmin_z f(M(z) | x_1, ..., x_n).    (1)

Self-SD applies Bayesian optimization (Jones et al., 1998) to identify an optimal skip layer set by iteratively selecting a new z based on a Gaussian process and evaluating it with Eq. (1). After a specified number of iterations, the best z is considered an approximation of z* and is fixed for inference. During decoding, the selected layers are skipped to efficiently generate draft tokens, which are then validated in parallel by the full-parameter LLM to ensure the output distribution remains unchanged.

3.2 Preliminary Study
While Self-SD improves inference efficiency, the optimal layers to skip vary significantly across different tasks. To demonstrate this, we analyze the performance of SD across multiple representative tasks, including summarization, reasoning, storytelling, translation, and text-to-SQL. As shown in Figure 2, an optimized skip-layer configuration for one task does not generalize well to others. For example, a configuration that accelerates summarization degrades performance in reasoning tasks. These results show that a static skip-layer configuration is suboptimal. This limits its effectiveness, particularly in real-world scenarios where query
types are unpredictable. To achieve both high inference efficiency and minimal performance degradation, task-specific configurations are essential. This motivates the development of KNN-SSD, which dynamically selects the most suitable skip-layer configuration based on task characteristics, ensuring robust and efficient speculative decoding across diverse tasks.

4 Methodology
We introduce KNN-SSD, a generalizable Self-SD method designed to improve inference efficiency while maintaining adaptability across diverse tasks. Figure 3 shows our method of accelerating inference. It first generates enough last hidden vectors for each task during the pre-inference process. Then, a fixed number of vectors are selected as representative anchors to fit a KNN model. For each task, its optimal skip layer set is searched using a Bayesian optimization process. In the inference process, a new input sample finds its cluster using the previous KNN model, and the corresponding skip layer set is used for the target LLM. Finally, we perform the standard Self-SD process, which contains the two stages of drafting and verification, to accelerate inference. By integrating these two processes, KNN-SSD provides a flexible and effective solution to accelerate LLM inference in real-world applications.

4.1 Pre-Inference
Given a set of domains D_1, ..., D_n, we first randomly sample multiple instances from each domain, denoted as d_i1, ..., d_im for domain D_i. Each sampled instance d_ij is then passed through a pretrained LLM M to obtain its last hidden vector representation v_ij. These samples are then aggregated and clustered into n groups µ_1, ..., µ_n using the K-means algorithm, where the number of clusters is set to match the number of domains. For each cluster µ_i, we identify k representative anchors based on their distance to the cluster centroid. The collection of selected anchors for cluster µ_i is denoted as A_i = {a_i1, ..., a_ik}, which will be used to fit a KNN model. The construction of the anchor set A_i is formally defined as follows:

    A_i = argmin_{S ⊆ D_i, |S| = k} Σ_{v_ij ∈ S} ||v_ij − µ_i||.    (2)

Subsequently, for each domain D_i, we utilize the anchor set {a_i1, ..., a_ik} to determine a domain-specific skip layer set z_i ∈ {0,1}^L, where L denotes the total number of layers in the language model M. Each element z_i^(j) indicates whether

Figure 3: Layer skipping and KNN process in KNN-SSD. Before LLM-related generation, KNN-SSD first performs (a) Layer set Searching Optimization: for each task, KNN-SSD generates a task-specific skip layer set and stores it in a configuration file; (b) Generate Anchor representatives: KNN-SSD then produces last hidden vectors for each task
to fit a KNN model. When a new sample is input, KNN-SSD first uses its last hidden vector as the input representative and queries the KNN model. Based on the retrieved result, it selects the corresponding skip layer set to perform decoding, thereby achieving acceleration.

the j-th layer should be skipped (z_i^(j) = 1) or retained (z_i^(j) = 0) during inference. To identify the optimal configuration z_i, we employ Bayesian Optimization (Jones et al., 1998) over the space of binary layer masks, aiming to minimize an objective black-box function f(·) that measures the average inference time per verified token:

    z_i* = argmin_{z_i} f(M(z_i) | a_i1, ..., a_ik).    (3)

All z_1*, ..., z_n* will be stored for future use and not be changed.

4.2 Inference
For a newly arrived sample s, we first extract its last hidden vector v from the model. We then perform a KNN search based on cosine similarity between the hidden vector of s and all representative anchors. This process yields a corresponding domain label, effectively classifying the sample into one of the known domains i*. Based on the identified domain, we apply its associated optimal skip-layer configuration z_{i*}* to M to accelerate inference:

    i*, j* = argmax_{i,j} (v · a_ij) / (||v|| · ||a_ij||),    (4)
    Domain(s) = i*,    (5)
    M ← z_{i*}*.    (6)

We then perform the standard Self-SD process (Zhang et al., 2024), which involves two stages: drafting and verification. During the drafting stage, the LLM uses the previously selected skip-layer configuration z_i as a draft model M(z_i) to generate a sequence of draft tokens:

    y' = argmax_y log P(y | x, y; M(z_i)),    (7)

where x and y denote the input and the output generated by the LLM, respectively, and y' represents the token produced by the autoregressive process. In the verification stage, the full LLM verifies the draft tokens in a single forward pass. This step validates the correctness of the generated tokens and either accepts them or triggers re-drafting if discrepancies are found.
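Equations (2) and (4)-(6) can be sketched with plain NumPy. This is a simplified illustration under our own naming (`select_anchors` and `query_domain` are hypothetical helpers, not the authors' code), and the per-cluster centroid below stands in for the K-means centroid µ_i.

```python
import numpy as np

def select_anchors(vectors, k):
    """Eq. (2): pick the k vectors closest (in Euclidean distance)
    to the cluster centroid to serve as anchors a_i1..a_ik."""
    vectors = np.asarray(vectors, dtype=float)
    centroid = vectors.mean(axis=0)
    dists = np.linalg.norm(vectors - centroid, axis=1)
    return vectors[np.argsort(dists)[:k]]

def query_domain(v, anchors_by_domain):
    """Eqs. (4)-(5): return the domain whose anchor has the highest
    cosine similarity to the new sample's last hidden vector."""
    v = np.asarray(v, dtype=float)
    best_domain, best_sim = None, -np.inf
    for domain, anchors in anchors_by_domain.items():
        sims = anchors @ v / (np.linalg.norm(anchors, axis=1) * np.linalg.norm(v))
        if sims.max() > best_sim:
            best_domain, best_sim = domain, sims.max()
    return best_domain

# Toy 2-D example with one anchor per domain.
anchors_by_domain = {
    "summarization": np.array([[1.0, 0.0]]),
    "reasoning": np.array([[0.0, 1.0]]),
}
domain = query_domain([0.9, 0.1], anchors_by_domain)
```

The retrieved `domain` would then index into the stored table of skip-layer sets z_1*, ..., z_n* (Eq. (6)) before running the usual Self-SD drafting and verification stages.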
To better simulate real-world task streams, we introduce the mix ratio r, which denotes the probability that the next input sample belongs to a different task than the current one. A mix ratio of 0 corresponds to a task-by-task input stream, where all consecutive samples come from the same task. In contrast, a mix ratio of 1 indicates maximum task mixing, where every two consecutive samples are from different tasks. As the mix ratio grows, the frequency of domain shift increases:

    P(s_{i+1} ∈ D_j | s_i ∈ D_k) = r / (N − 1) if j ≠ k;  1 − r if j = k.    (8)

Model                  Method        R=0.0  R=0.3  R=0.7  R=1.0  Speed (token/s)  Overall E(Spd.)
LLaMA-2-13B            Vanilla       1.00×  1.00×  1.00×  1.00×  13.62            1.00×
                       Self-SD(Fix)  1.24×  1.21×  1.19×  1.17×  16.34            1.20×
                       Self-SD(Mix)  1.23×  1.27×  1.24×  1.23×  16.88            1.24×
                       KNN-SSD       1.42×  1.45×  1.43×  1.45×  19.61            1.44×
LLaMA-2-13B-Chat       Vanilla       1.00×  1.00×  1.00×  1.00×  13.22            1.00×
                       Self-SD(Fix)  1.13×  1.14×  1.08×  1.10×  14.67            1.11×
                       Self-SD(Mix)  1.13×  1.17×  1.17×  1.16×  15.33            1.16×
                       KNN-SSD       1.33×  1.36×  1.36×  1.37×  17.85            1.35×
Qwen-2.5-14B           Vanilla       1.00×  1.00×  1.00×  1.00×  11.16            1.00×
                       Self-SD(Fix)  1.25×  1.23×  1.27×  1.28×  14.06            1.26×
                       Self-SD(Mix)  1.40×  1.36×  1.39×  1.38×  15.40            1.38×
                       KNN-SSD       1.60×  1.64×  1.63×  1.61×  18.08            1.62×
Qwen-2.5-14B-Instruct  Vanilla       1.00×  1.00×  1.00×  1.00×  10.79            1.00×
                       Self-SD(Fix)  1.18×  1.20×  1.20×  1.17×  12.84            1.19×
                       Self-SD(Mix)  1.26×  1.24×  1.27×  1.25×  13.49            1.25×
                       KNN-SSD       1.52×  1.49×  1.50×  1.52×  16.30            1.51×

Table 1: Comparison between KNN-SSD and two Self-SD methods. R indicates the mix ratio of sample streams. We report the expected speedup ratio E(Spd.) under different mix ratios, the average decoding speed (token/s) under greedy decoding, and the average speedup ratio across different mix ratios. More details are provided in Appendix C.3.

5 Experiments
5.1 Experimental Setup
Implementation Details. We mainly evaluate KNN-SSD on the LLaMA-2 series (Touvron et al., 2023) and Qwen-2.5 series (Yang et al., 2025) across various tasks, including summarization, mathematical reasoning, storytelling, translation, and text-to-SQL. The evaluation datasets include CNN/Daily Mail (CNN/DM) (Nallapati et al., 2016), GSM8K (Cobbe et al., 2021), TinyStories (Eldan and Li, 2023), Wmt16 DE-EN (Wmt16) (Bojar et al., 2016), and Spider2 (Lei et al., 2025). For each dataset, we used Bayesian optimization2 (BO) to perform 1,000 iterations on 8 representative samples in search of the optimal skip-layer configuration. The representative samples are selected via the K-means algorithm from all last hidden vectors generated by the LLM on the corresponding dataset, ensuring optimal coverage of the feature space. The maximum generation lengths on CNN/DM, GSM8K, Wmt16, Spider2, and TinyStories are set to 64, 64, 64, 64, and 128, respectively. We conduct 1-shot evaluation for CNN/DM and TinyStories, 3-shot evaluation for Spider2, and 5-shot evaluation for GSM8K and Wmt16.

2https://github.com/bayesian-optimization/BayesianOptimization
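The mix-ratio stream of Eq. (8) is easy to simulate; the sketch below (our illustration, not the released code) switches task with probability r, choosing uniformly among the other N − 1 tasks.

```python
import random

def sample_stream(tasks, length, r, seed=0):
    """Generate a task stream where each next sample switches to a
    different task with probability r (Eq. 8), uniformly over the
    remaining tasks, and stays on the current task otherwise."""
    rng = random.Random(seed)
    stream = [rng.choice(tasks)]
    for _ in range(length - 1):
        cur = stream[-1]
        if rng.random() < r:
            stream.append(rng.choice([t for t in tasks if t != cur]))
        else:
            stream.append(cur)
    return stream

tasks = ["summarization", "reasoning", "translation", "storytelling", "text2sql"]
fixed = sample_stream(tasks, 10, r=0.0)  # task-by-task: never switches
mixed = sample_stream(tasks, 10, r=1.0)  # maximum mixing: switches every step
```

With r = 0 every consecutive sample shares a task, and with r = 1 no two consecutive samples do, matching the two extremes described above.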
For each dataset, we extracted the most representative k = 10 hidden vectors from the last hidden layer across all data samples using cosine similarity to serve as anchor points for the KNN model, following the same approach as introduced earlier in the BO framework. For each new input sample, we also compute the cosine similarity between its last hidden vector and the anchors, and assign it to the task of its nearest neighbor.

Model             Method        M     α     Speedup
LLaMA-2-13B       Vanilla       1.00  -     1.00×
                  Self-SD(Fix)  2.17  0.62  1.10×
                  Self-SD(Mix)  2.53  0.68  1.14×
                  KNN-SSD       3.12  0.88  1.34×
LLaMA-2-13B-Chat  Vanilla       1.00  -     1.00×
                  Self-SD(Fix)  1.97  0.57  1.04×
                  Self-SD(Mix)  2.14  0.59  1.09×
                  KNN-SSD       2.87  0.85  1.28×

Table 2: The results demonstrate the mean accepted tokens, token acceptance rate, and actual speedup ratio obtained from our tests on the LLaMA-2 series, showing that KNN-SSD outperforms the two Self-SD methods on every metric.

Baselines. In our primary experiments, we compared KNN-SSD and the Self-SD approach (Zhang et al., 2024) to assess their effectiveness. For the Self-SD method, we primarily simulated two scenarios. In the first scenario, a fixed skip-layer configuration was determined based on the first sample in the task stream and remained unchanged throughout the process, which is denoted as Self-SD(Fix). In the second scenario, the skip-layer configuration was adjusted by re-performing BO according to the task distribution within the stream, and the newly
https://arxiv.org/abs/2505.16162v1
searched configuration was subsequently applied for inference and likewise remained unchanged; this is denoted Self-SD(Mix).

Evaluation Metrics. We evaluate KNN-SSD using two standard metrics: the mean generated length M (Stern et al., 2018) and the token acceptance rate α (Leviathan et al., 2023). Beyond these, we also report the expected decoding throughput in tokens per second, along with the expected wall-time speedup ratio compared to standard autoregressive decoding. Given M and α, the expected speedup can be derived by the formula of Leviathan et al. (2023):

$$E(\text{Speedup}) = \frac{M\alpha}{(M-1)(1-r) + \alpha} \quad (9)$$

where r denotes the ratio of skipped layers.

5.2 Main Result

Table 1 presents the comparison between KNN-SSD and two Self-SD methods on generation tasks. In our experiments, we evaluate KNN-SSD under four settings, mix ratio = 0, 0.3, 0.7, and 1, with 40 samples from each of the five datasets, 200 samples in total. The experimental results demonstrate the following findings: (1) KNN-SSD shows superior efficiency over prior methods, achieving consistent speedups of 1.35×–1.62× over vanilla autoregressive decoding across various models. (2) The mix ratio of sample streams does not affect the speedup of KNN-SSD. The speedup remains stable, which indicates that KNN-SSD can handle varied samples in a more realistic scenario.

[Figure 4: The mean accepted tokens and mean acceptance rate under task-by-task sample streams (Summarization, Reasoning, Translation, Storytelling, Text2SQL). The dashed lines represent the average length and rate achieved by KNN-SSD across all five datasets.]
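Eq. (9) is straightforward to evaluate. The helper below (our own naming, not from the paper's code) shows how M, α, and the skip ratio r trade off:

```python
def expected_speedup(M, alpha, r):
    """Expected wall-time speedup per Eq. (9):
        E = M * alpha / ((M - 1) * (1 - r) + alpha)

    M:     mean generated/accepted tokens per target forward pass
    alpha: token acceptance rate
    r:     fraction of layers the draft skips, so each draft step costs
           (1 - r) of a full forward pass (the cost coefficient c = 1 - r
           used later in Appendix C.2).
    """
    return (M * alpha) / ((M - 1) * (1 - r) + alpha)
```

For example, plugging in Table 2's LLaMA-2-13B KNN-SSD numbers (M = 3.12, α = 0.88) with a hypothetical skip ratio r = 0.5 gives roughly a 1.4× expected speedup; the value of r here is illustrative, since the searched skip ratios are not listed in the text. With M = 1 the formula collapses to 1.0, recovering vanilla decoding.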
We present the mean accepted tokens, acceptance rate, and actual speedup of the LLaMA-2-13B series in Table 2 and Figure 4, which further validates the superiority of KNN-SSD over Self-SD.

5.3 Analysis

Inter & Intra. We use the MATH (Hendrycks et al., 2021) dataset to assess the capabilities of KNN-SSD on a single dataset with multiple domains. In the MATH dataset, math questions are categorized into seven types. Thus, using one specific skip-layer set for this dataset is insufficient, and we introduce fine-grained clustering to handle this mixed domain. Figure 6 shows that each type of math question can be clustered into a single group. Table 3 reports the speedup for each method, where we can clearly see that KNN-SSD outperforms the Self-SD methods, achieving a speedup of 1.23× and a mean generated length M of 2.37.

[Figure 5 (panels: All Vectors, Representative Vectors): Visualization of last hidden vectors from five domains of samples. Results show these vectors can be clearly divided into five clusters. From each cluster, we selected ten vectors as representative anchors for our KNN model.]

Methods       M     α     Speedup
Vanilla       1.00  -     1.00×
Self-SD(Fix)  1.54  0.51  0.97×
Self-SD(Mix)  1.82  0.59  1.02×
KNN-SSD       2.37  0.81  1.23×

Table 3: Results on the MATH dataset using LLaMA-2-13B.

Figure 7 visualizes speedups in the task-by-task setting. Self-SD(Fix) achieves high performance only on the first subtask and declines on the remaining subtasks, while KNN-SSD has a better speedup
than the other two Self-SD methods.

Out-of-Domain Generalization. We adopt the XSUM (Narayan et al., 2018), MATH (Hendrycks et al., 2021), and Alpaca (Taori et al., 2023) datasets as out-of-domain tasks to assess the generalizability of KNN-SSD. XSUM and CNN/DM are summarization tasks, whereas MATH and GSM8K involve reasoning. Therefore, although we did not search for dedicated optimal skip-layer sets for XSUM and MATH in our experiments, it is reasonable that KNN-SSD would assign XSUM samples to CNN/DM and thus adopt CNN/DM's optimal skip-layer set, and the same applies to MATH samples. Compared to these two datasets, the Alpaca dataset contains more diverse instruction-answer pairs spanning summarization, reasoning, grammar, and many other tasks. Results indicate that although some of these domains are not covered by the five datasets in our main experiments, our method can still assign an unknown sample to its most similar domain and thus achieve inference acceleration. As shown in Table 4, the model achieves an approximately 1.15×–1.25× speedup under the KNN-SSD method, even without a prior search.

[Figure 6 (panels: All Vectors, Representative Vectors): Visualization of 700 last hidden vectors from the MATH dataset using the t-SNE method. All vectors can be categorized into 7 groups, which aligns with the fact that the MATH dataset has 7 kinds of problems.]

[Figure 7: Speedup results for task-by-task sample streams on the MATH dataset under three methods, over the subtasks Algebra, Counting and Probability, Geometry, Intermediate Algebra, Number Theory, Prealgebra, and Precalculus. Per-subtask speedups: Self-SD(Fix) 1.23, 1.21, 1.16, 1.12, 1.03, 1.05, 0.97; Self-SD(Mix) 1.12, 1.14, 1.09, 1.10, 1.06, 1.03, 1.02; KNN-SSD 1.27, 1.21, 1.23, 1.22, 1.26, 1.24, 1.23. While KNN-SSD maintains a speedup of around 1.25×, the two Self-SD methods decline as the number of domains grows.]
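The out-of-domain assignment described above reduces to a 1-nearest-neighbor lookup over the stored anchors by cosine similarity. Below is a minimal plain-Python sketch; the function names, labels, and 2-D vectors are illustrative stand-ins for the ten representative last-hidden vectors stored per domain, not the paper's actual implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def assign_domain(hidden, anchors):
    """Return the domain label of the anchor most similar to `hidden`.

    anchors: list of (domain_label, vector) pairs; an unseen sample (e.g.
    an XSUM article) inherits the skip-layer set of its nearest domain.
    """
    best_label, _ = max(anchors, key=lambda pair: cosine(hidden, pair[1]))
    return best_label
```

An XSUM sample whose hidden vector lands closest to a CNN/DM anchor would thereby reuse CNN/DM's searched skip-layer configuration, which is exactly the fallback behavior measured in Table 4.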
Number of Clusters. Table 5 shows the influence of the number of clusters. We conducted experiments on the Alpaca dataset, as it covers a variety of domains, using K-means clustering with varying numbers of clusters. As the results show, the speedup improves as the number of clusters increases, eventually surpassing the speedup ratio observed in the out-of-domain experiments (Table 4). However, when the cluster count exceeds 5 (e.g., up to 7), the speedup plateaus, indicating that partitioning Alpaca into five clusters is sufficient; further subdivision yields no additional gains.

Datasets  Methods   M     α     Speedup
XSUM      Vanilla   1.00  -     1.00×
          Self-SD   1.42  0.56  0.99×
          KNN-SSD   2.51  0.84  1.24×
MATH      Vanilla   1.00  -     1.00×
          Self-SD   1.34  0.48  0.93×
          KNN-SSD   2.13  0.76  1.17×
Alpaca    Vanilla   1.00  -     1.00×
          Self-SD   1.26  0.43  0.92×
          KNN-SSD   1.95  0.67  1.15×

Table 4: Results on out-of-domain datasets using LLaMA-2-13B-Chat. No representative anchors for these three domains were generated.

Num.  M     α     Speedup
1     1.86  0.65  1.05×
3     2.20  0.75  1.17×
5     2.52  0.80  1.23×
7     2.55  0.81  1.23×

Table 5: Results on the Alpaca dataset with different numbers of clusters using LLaMA-2-13B-Chat. Num. denotes the number of clusters.

Case Study. To better illustrate
how our method works, we provide a case study that presents a typical sample stream. In Figure 8, a sample stream contains three common types of queries a user might ask: summarization, reasoning, and translation. For each input query, KNN-SSD first computes its last hidden vector and then uses the KNN model to find its optimal skipped-layer set. Typical speculative decoding is then conducted with draft and verification steps, where blue and red tokens indicate that they were generated in the draft and verification steps, respectively. By constantly switching skipped-layer sets, KNN-SSD achieves a stable speedup compared to other methods that use a static strategy, which is insufficient for diverse inputs.

[Figure 8 contents: three example queries, each preceded by "Searching for nearest skip layer set" and followed by the model's output: a news-article summarization prompt (a Carlos Tevez article), a grade-school math word problem (counting Maddison's marbles), and a German-to-English translation of a Spielberg quote.]

6 Conclusion

In this work, we introduce KNN-SSD, an algorithm that leverages K-Nearest Neighbor search to match
[Figure 8: Case study of how KNN-SSD works. Blue tokens indicate tokens generated during the drafting step and verified by the model, while red tokens indicate tokens generated by prediction from the verification step. Squares in red and blue indicate skipped attention layers and skipped MLP layers, respectively.]

suitable skipped layers for inputs from various domains. KNN-SSD is designed to find an optimal skipped-layer set for each domain of data, which accelerates LLM inference losslessly. To assess its ability, we define the mix ratio of a sample stream, indicating how frequently the domain changes. We conducted extensive experiments with various LLMs and mix ratios and found that KNN-SSD can achieve a speedup of around 1.3×–1.6× without changing the original distribution of the generated tokens. Our in-depth analysis indicates that a single dataset may also contain mixed domains. Furthermore, KNN-SSD can achieve a 1.2× speedup on out-of-domain datasets, showing its great potential for handling varied data streams in real-life scenarios.

Limitations

A few limitations need to be considered while our
KNN-SSD achieves a notable speedup on various models. First, we did not incorporate draft-tree verification, which has been shown to improve the token acceptance rate (Xia et al., 2025). Second, our current evaluation is limited to models of moderate scale. Due to practical considerations related to computational resources, we have not yet extended our method to larger-scale models. We leave these directions for future work.

Ethics Statement

The datasets used in our experiments are publicly released and labeled through interaction with humans in English. In this process, user privacy is protected, and no personal information is contained in the datasets. The scientific artifacts that we used are available for research under permissive licenses, and our use of these artifacts is consistent with their intended use. Therefore, we believe that our research meets the ethics guidelines of ACL.

References

Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, and William Brandon. 2024. Hydra: Sequentially-dependent draft heads for medusa decoding. Preprint, arXiv:2402.05109.

Saleh Ashkboos, Maximilian L. Croci, Marcelo Gennari do Nascimento, Torsten Hoefler, and James Hensman. 2024. SliceGPT: Compress large language models by deleting rows and columns. Preprint, arXiv:2401.15024.

Gregor Bachmann, Sotiris Anagnostidis, Albert Pumarola, Markos Georgopoulos, Artsiom Sanakoyeu, Yuming Du, Edgar Schönfeld, Ali Thabet, and Jonas Kohler. 2025. Judge decoding: Faster speculative sampling requires going beyond model alignment. Preprint, arXiv:2501.19309.

Nikhil Bhendawade, Irina Belousova, Qichen Fu, Henry Mason, Mohammad Rastegari, and Mahyar Najibi. 2024. Speculative streaming: Fast LLM inference without auxiliary models. Preprint, arXiv:2402.11131.
Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Aurelie Neveol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Specia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 conference on machine translation. In Proceedings of the First Conference on Machine Translation, pages 131–198, Berlin, Germany. Association for Computational Linguistics.

Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D. Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple LLM inference acceleration framework with multiple decoding heads. Preprint, arXiv:2401.10774.

Charlie Chen, Sebastian Borgeaud, Geoffrey Irving, Jean-Baptiste Lespiau, Laurent Sifre, and John Jumper. 2023. Accelerating large language model decoding with speculative sampling. Preprint, arXiv:2302.01318.

Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word problems. CoRR, abs/2110.14168.

Ronen Eldan and Yuanzhi Li. 2023. TinyStories: How small can language models be and still speak coherent English? Preprint, arXiv:2305.07759.

Mostafa Elhoushi, Akshat Shrivastava, Diana Liskovich, Basil Hosmer, Bram Wasti, Liangzhen Lai, Anas Mahmoud, Bilge Acun, Saurabh Agarwal, Ahmed Roman, Ahmed Aly, Beidi Chen, and Carole-Jean Wu. 2024. LayerSkip: Enabling early exit inference and self-speculative decoding. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12622–12642, Bangkok, Thailand. Association for Computational Linguistics.

Elias Frantar
and Dan Alistarh. 2023. SparseGPT: Massive language models can be accurately pruned in one-shot. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 10323–10337. PMLR.

Yichao Fu, Peter Bailis, Ion Stoica, and Hao Zhang. 2024. Break the sequential dependency of LLM inference using lookahead decoding. In Forty-first International Conference on Machine Learning.

Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. 2024. MiniLLM: Knowledge distillation of large language models. Preprint, arXiv:2306.08543.

Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. Preprint, arXiv:2103.03874.

Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.

Yunhai Hu, Zining Liu, Zhenyuan Dong, Tianfan Peng, Bradley McDanel, and Sai Qian Zhang. 2025. Speculative decoding and beyond: An in-depth survey of techniques. Preprint, arXiv:2502.19732.

Yukun Huang, Yanda Chen, Zhou Yu, and Kathleen McKeown. 2022. In-context learning distillation: Transferring few-shot learning ability of pre-trained language models. Preprint, arXiv:2212.10670.

Donald R Jones, Matthias Schonlau, and William J Welch. 1998. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13:455–492.

Sehoon Kim, Karttikeya Mangalam, Suhong Moon, Jitendra Malik, Michael W Mahoney, Amir Gholami, and Kurt Keutzer. 2023. Speculative decoding with big little decoder. In Advances in Neural Information Processing Systems, volume 36, pages 39236–39256. Curran Associates, Inc.
Fangyu Lei, Jixuan Chen, Yuxiao Ye, Ruisheng Cao, Dongchan Shin, Hongjin Su, Zhaoqing Suo, Hongcheng Gao, Wenjing Hu, Pengcheng Yin, Victor Zhong, Caiming Xiong, Ruoxi Sun, Qian Liu, Sida Wang, and Tao Yu. 2025. Spider 2.0: Evaluating language models on real-world enterprise text-to-SQL workflows. Preprint, arXiv:2411.07763.

Yaniv Leviathan, Matan Kalman, and Yossi Matias. 2023. Fast inference from transformers via speculative decoding. In Proceedings of the 40th International Conference on Machine Learning, volume 202 of Proceedings of Machine Learning Research, pages 19274–19286. PMLR.

Yijin Liu, Fandong Meng, and Jie Zhou. 2024. Accelerating inference in large language models with a unified layer skipping strategy. Preprint, arXiv:2404.06954.

Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra. 2023. LLM-QAT: Data-free quantization aware training for large language models. Preprint, arXiv:2305.17888.

Michael R. Metel, Peng Lu, Boxing Chen, Mehdi Rezagholizadeh, and Ivan Kobyzev. 2024. Draft on the fly: Adaptive self-speculative decoding using cosine similarity. Preprint, arXiv:2410.01028.

Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çağlar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning, pages 280–290, Berlin, Germany. Association for Computational Linguistics.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on
Empirical Methods in Natural Language Processing, Brussels, Belgium.

OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, et al. 2024. GPT-4 technical report. Preprint, arXiv:2303.08774.

Gunho Park, Baeseong Park, Minsub Kim, Sungjae Lee, Jeonghoon Kim, Beomseok Kwon, Se Jung Kwon, Byeongwook Kim, Youngjoo Lee, and Dongsoo Lee. 2024. LUT-GEMM: Quantized matrix multiplication based on LUTs for efficient inference in large-scale generative language models. Preprint, arXiv:2206.09557.

Mitchell Stern, Noam Shazeer, and Jakob Uszkoreit. 2018. Blockwise parallel decoding for deep autoregressive models. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc.

Mingjie Sun, Zhuang Liu, Anna Bair, and J. Zico Kolter. 2024. A simple and effective pruning approach for large language models. Preprint, arXiv:2306.11695.

Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. Preprint, arXiv:2307.09288.

Zhuofan Wen, Shangtong Gui, and Yang Feng. 2024. Speculative decoding with CTC-based draft model for LLM inference acceleration. In Advances in Neural Information Processing Systems, volume 37, pages 92082–92100. Curran Associates, Inc.

Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, and Alham Fikri Aji. 2024. LaMini-LM: A diverse herd of distilled models from large-scale instructions. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 944–964, St. Julian's, Malta. Association for Computational Linguistics.

Heming Xia, Tao Ge, Peiyi Wang, Si-Qing Chen, Furu Wei, and Zhifang Sui.
2023. Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3909–3925, Singapore. Association for Computational Linguistics.

Heming Xia, Yongqi Li, Jun Zhang, Cunxiao Du, and Wenjie Li. 2025. SWIFT: On-the-fly self-speculative decoding for LLM inference acceleration. Preprint, arXiv:2410.06916.

Heming Xia, Zhe Yang, Qingxiu Dong, Peiyi Wang, Yongqi Li, Tao Ge, Tianyu Liu, Wenjie Li, and Zhifang Sui. 2024. Unlocking efficiency in large language model inference: A comprehensive survey of speculative decoding. In Findings of the Association for Computational Linguistics: ACL 2024, pages 7655–7671, Bangkok, Thailand. Association for Computational Linguistics.

An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, et al. 2025. Qwen2.5 technical report. Preprint, arXiv:2412.15115.

Zhewei Yao, Reza Yazdani Aminabadi, Minjia Zhang, Xiaoxia Wu, Conglong Li, and Yuxiong He. 2022. ZeroQuant: Efficient and affordable post-training quantization for large-scale transformers. In Advances in Neural Information Processing Systems, volume 35, pages 27168–27183. Curran Associates, Inc.

Jun Zhang, Jue Wang, Huan Li, Lidan Shou, Ke Chen, Gang Chen, and Sharad Mehrotra. 2024. Draft & verify: Lossless large language model acceleration via self-speculative decoding. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11263–11282, Bangkok, Thailand. Association for Computational Linguistics.

Lefan Zhang, Xiaodan Wang, Yanhua Huang,
and Ruiwen Xu. 2025. Learning harmonized representations for speculative sampling. Preprint, arXiv:2408.15766.

A Preliminary Details

We visualize the optimal skipped-layer sets we searched across five tasks on two series of models in Figure 9 and Figure 10.

B Datasets

We mainly evaluate KNN-SSD on the LLaMA-2 (Touvron et al., 2023) series and the Qwen-2.5 (Yang et al., 2025) series across diverse tasks. We select five different datasets, covering summarization, mathematical reasoning, translation, storytelling, and text-to-SQL: the CNN/Daily Mail (CNN/DM) (Nallapati et al., 2016), GSM8K (Cobbe et al., 2021), TinyStories (Eldan and Li, 2023), WMT16 DE-EN (WMT16) (Bojar et al., 2016), and Spider2 (Lei et al., 2025) datasets, respectively. The maximum generation lengths on CNN/DM, GSM8K, WMT16, Spider2, and TinyStories are set to 64, 64, 64, 64, and 128, respectively. We conduct 1-shot evaluation for CNN/DM and TinyStories, 3-shot evaluation for Spider2, and 5-shot evaluation for GSM8K and WMT16. For further analysis, we also use the XSUM (Narayan et al., 2018), MATH (Hendrycks et al., 2021), and Alpaca (Taori et al., 2023) datasets for the summarization, mathematical reasoning, and instruction-following tasks, respectively.

CNN/DM The CNN/Daily Mail dataset is a large-scale benchmark for abstractive text summarization. It consists of long news articles paired with short summaries, derived from the CNN and Daily Mail websites. The dataset is used to evaluate performance on long-form input and coherent summary generation.

GSM8K GSM8K is a high-quality benchmark dataset for arithmetic reasoning, consisting of grade school math word problems and their detailed step-by-step solutions. It is used to evaluate reasoning and problem-solving capabilities in mathematical contexts.
TinyStories TinyStories is a dataset of short, synthetically generated children's stories designed to support research on language modeling and narrative understanding. The stories are simple in structure and vocabulary, making the dataset suitable for studying controlled text generation.

WMT16 The WMT16 DE-EN dataset is a standard benchmark for machine translation, consisting of parallel German-English sentence pairs collected from various sources. It is used to evaluate the translation quality of models.

Spider2 Spider 2.0 is a complex, cross-domain text-to-SQL benchmark designed to evaluate the ability of models to generate executable SQL queries from natural language questions. It includes diverse databases and query types, requiring models to generalize to unseen schemas and handle intricate reasoning.

XSUM XSUM is an abstractive summarization dataset consisting of BBC news articles paired with single-sentence summaries, in contrast to CNN/DM, which provides longer, multi-sentence summaries for news articles. It emphasizes concise and information-rich summaries, testing the models' ability to extract key information.

MATH The MATH dataset is a benchmark for mathematical problem solving, comprising high school-level competition problems with detailed step-by-step solutions. It covers a wide range of topics, including algebra, counting and probability, geometry, intermediate algebra, number theory, prealgebra, and precalculus, and is designed to evaluate the advanced reasoning and symbolic manipulation abilities of language models.

Alpaca The Alpaca dataset is a collection of instruction-following demonstrations generated using
the self-instruct method, based on the outputs of a strong language model. It covers a wide range of tasks, making it suitable for testing the generalizability of KNN-SSD.

C Experimental Details

C.1 Setups

During the pre-inference stage, we set the maximum number of Bayesian Optimization iterations to 1,000 and the number of samples to 8. For each dataset, we first randomly choose 1,000 last hidden vectors, then use the K-means algorithm to find 10 representatives as anchors for the KNN model. During inference, experiments were conducted on 8× NVIDIA RTX 3090 GPUs (24GB) and 4× NVIDIA RTX A6000 GPUs (40GB) with CUDA 12.0, and an Intel(R) Xeon(R) Gold 5117 CPU with 14 cores. PyTorch and the Hugging Face Transformers package are used to run both the baselines and our method.

[Figure 9 layer grids for panels (a) Summarization - CNN/DM, (b) Reasoning - GSM8K, (c) Translation - WMT16, (d) Storytelling - TinyStories, and (e) Text-to-SQL - Spider2.]
Figure 9: Visualization of the skipped-layer set configuration of LLaMA-2-13B optimized by Self-SD (Zhang et al., 2024) on different task domains. Gray squares indicate retained layers, red squares denote skipped attention layers, and blue squares signify skipped MLP layers.

C.2 Evaluation Metrics

We further describe the two main metrics used in the main experiments. The mean accepted length M denotes the average number of output tokens produced by the target LLM during each forward pass. The token acceptance rate α is the ratio of tokens accepted by the target LLM to the total number of draft steps, reflecting the expectation that the target LLM accepts a token generated by the draft model. Given M and α, the expected wall-time speedup can be derived as follows:

$$E(\text{Speedup}) = \frac{M\alpha}{(M-1)c + \alpha} \quad (10)$$

where c is the cost coefficient defined in Leviathan et al. (2023). It represents the ratio of the time the draft model requires to the time the target model requires for a single forward pass. In the Self-SD method, we define c = 1 − r, where r is the proportion of skipped layers to total layers, since the draft model only needs to process the retained layers.

C.3 Details of Main Results

More details are provided in Table 6. The results show that KNN-SSD outperforms the two Self-SD methods on both metrics, indicating that our method can handle a more diverse input stream with stable inference acceleration.
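The anchor-selection step in C.1 (K-means over cached last-hidden vectors, then one representative per cluster) can be sketched as follows. This is a toy plain-Python version with a deliberately naive initialization (the first k vectors), not the paper's actual implementation:

```python
def kmeans_anchors(vectors, k, iters=20):
    """Cluster `vectors` with Lloyd's K-means, then return, for each
    cluster, the real vector nearest its centroid as that cluster's
    anchor (so anchors are actual cached hidden vectors, not means)."""
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))

    # Naive init: first k vectors (real K-means would use random/k-means++).
    centroids = [list(v) for v in vectors[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[min(range(k), key=lambda i: dist2(v, centroids[i]))].append(v)
        # Recompute each centroid as its cluster mean; keep old centroid
        # if a cluster went empty.
        centroids = [
            [sum(col) / len(c) for col in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return [min(vectors, key=lambda v: dist2(v, c)) for c in centroids]
```

With k = 10 over the 1,000 cached vectors per dataset this yields the ten anchors described in C.1; returning the nearest real vector (rather than the centroid itself) keeps every anchor a genuine hidden state the model actually produced.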
[Figure 10 layer grids for panels (a) Summarization - CNN/DM, (b) Reasoning - GSM8K, (c) Translation - WMT16, (d) Storytelling - TinyStories, and (e) Text-to-SQL - Spider2.]
Figure 10: Visualization of the skipped-layer set configuration of Qwen-2.5-14B optimized by Self-SD (Zhang et al., 2024) on different task domains.

Models                 Methods       R=0.0        R=0.3        R=0.7        R=1.0        Overall
                                     M     α      M     α      M     α      M     α      M     α
LLaMA-2-13B            Vanilla       1.00  -      1.00  -      1.00  -      1.00  -      1.00  -
                       Self-SD(Fix)  2.22  0.65   2.19  0.63   2.14  0.59   2.12  0.61   2.17  0.62
                       Self-SD(Mix)  2.50  0.64   2.58  0.70   2.53  0.69   2.52  0.68   2.53  0.68
                       KNN-SSD       3.10  0.86   3.14  0.88   3.11  0.89   3.12  0.88   3.12  0.88
LLaMA-2-13B-Chat       Vanilla       1.00  -      1.00  -      1.00  -      1.00  -      1.00  -
                       Self-SD(Fix)  2.03  0.60   1.99  0.56   1.92  0.55   1.97  0.57   1.97  0.57
                       Self-SD(Mix)  2.10  0.56   2.14  0.61   2.18  0.59   2.15  0.58   2.14  0.59
                       KNN-SSD       2.84  0.84   2.85  0.86   2.90  0.85   2.90  0.86   2.87  0.85
Qwen-2.5-14B           Vanilla       1.00  -      1.00  -      1.00  -      1.00  -      1.00  -
                       Self-SD(Fix)  2.41  0.82   2.40  0.82   2.44  0.84   2.48  0.85   2.43  0.83
                       Self-SD(Mix)  3.02  0.89   2.94  0.89   2.99  0.90   2.97  0.90   2.98  0.90
                       KNN-SSD       4.35  0.99   4.42  1.00   4.40  1.00   4.38  0.99   4.37  1.00
Qwen-2.5-14B-Instruct  Vanilla       1.00  -      1.00  -      1.00  -      1.00  -      1.00  -
                       Self-SD(Fix)  2.12  0.80   2.16  0.80   2.16  0.80   2.10  0.79   2.13  0.80
                       Self-SD(Mix)  2.32  0.83   2.25  0.84   2.35  0.87   2.34  0.87   2.32  0.85
                       KNN-SSD       3.78  1.00   3.69  0.99   3.71  0.99   3.75  1.00   3.73  1.00

Table 6: Comparison between KNN-SSD and two Self-SD methods. R indicates the mix ratio of sample streams. We report
arXiv:2505.16164v1 [cs.CL] 22 May 2025

Can LLMs Simulate Human Behavioral Variability? A Case Study in the Phonemic Fluency Task

Mengyang Qiu1* Zoe Brisebois1 Siena Sun2
1Department of Psychology, Trent University, Canada
2Speech-Language Pathology Program, Saint Elizabeth University, United States
{mengyangqiu, zoebrisebois}@trentu.ca ssun@steu.edu

Abstract

Large language models (LLMs) are increasingly explored as substitutes for human participants in cognitive tasks, but their ability to simulate human behavioral variability remains unclear. This study examines whether LLMs can approximate individual differences in the phonemic fluency task, where participants generate words beginning with a target letter. We evaluated 34 model configurations, varying prompt specificity, sampling temperature, and model type, and compared outputs to responses from 106 human participants. While some configurations, especially Claude 3.7 Sonnet, matched human averages and lexical preferences, none reproduced the scope of human variability. LLM outputs were consistently less diverse and structurally rigid, and LLM ensembles failed to increase diversity. Network analyses further revealed fundamental differences in retrieval structure between humans and models. These results highlight key limitations in using LLMs to simulate human cognition and behavior.

1 Introduction

Large language models (LLMs) have rapidly advanced in recent years, achieving impressive performance across a wide range of natural language tasks. As a result, researchers have become increasingly interested in using LLMs as experimental tools in cognitive and behavioral science. Some even propose that LLMs could replace human participants in certain studies, offering scalable and efficient alternatives for simulating human behavior (Dillion et al., 2023).
*Corresponding author.

This idea has intuitive appeal: LLMs can generate fluent responses on demand, and arguably encode the "wisdom of the crowd" from their massive training data (Trott, 2024). For instance, Hansen and Hebart (2022) found that GPT-3 could generate semantic features for concepts that not only mirrored the distribution of human-generated features but also matched human norms in their ability to predict similarity, relatedness, and category membership. On the other hand, a growing body of work has pointed out that LLMs may lack a crucial feature of human language and cognition: variability. Cuskley et al. (2024) argue that LLMs are limited by their training on written language, which captures only a narrow slice of human communicative behavior. Zanotto and Aroyehun (2024) offer empirical support, showing that LLM-generated texts exhibit substantially lower linguistic diversity than human-written texts across a range of features, including syntax, vocabulary, and style.

This lack of variability also becomes evident when examining how LLMs perform in tasks designed to probe semantic or associative structures. In a semantic fluency task where participants named as many animals as possible, Wang et al. (2025) found that LLM-generated semantic networks were structurally different from those of humans. Compared to human networks, those of LLMs exhibited weaker local associations, poorer global integration, and greater rigidity in semantic organization. Similarly, Haim et al. (2025) examined word association networks in a STEM-related mindset task and found that GPT-3.5 produced networks that were notably sparser and less interconnected than those generated by
human participants. It is worth noting that Wang et al. (2025) recognized this limitation and attempted to address it by prompting LLMs to role-play 30 different occupations, effectively simulating multiple distinct agents. However, the resulting semantic networks still failed to match the flexibility and associative structure of human data. Crucially, real human participants, even those drawn from relatively homogeneous groups such as college students, consistently exhibit meaningful individual differences. These differences may reflect not only variation in the amount, type, and organization of semantic knowledge, but also differences in executive function and strategy use. Of course, because LLMs are optimized to encode and reproduce stable semantic relationships, consistency across prompts may reflect a design feature. From this perspective, the internal coherence and semantic regularity that make LLMs so effective at language modeling may also constrain their ability to simulate the variability that characterizes human cognition.

If semantic-based tasks limit LLMs' ability to simulate human behavioral variability, it raises the question: how would they perform on a task that relies less on meaning? One such task is phonemic (or letter) fluency, where individuals are asked to produce as many words as possible that begin with a particular letter, such as F, within a fixed time limit. Unlike semantic fluency, this task requires an effortful search through the mental lexicon based on orthographic or phonological form–an unnatural retrieval strategy in everyday language use. Prior analyses have shown that phonemic fluency performance is influenced by both lexical properties, such as word frequency, age of acquisition, and phonetic similarity, and cognitive factors like switching patterns (Cho et al., 2021).
The goal of the present study is to explore whether LLMs are more capable of simulating inter-participant variability in a less semantically driven challenge–namely, the phonemic fluency task–or whether their outputs remain rigid, perhaps driven by frequency-based or alphabetic heuristics. Alongside this primary question, we examine several additional factors that may influence the diversity and human-likeness of LLM-generated responses:

1. Participant context prompts: We systematically vary the type of information provided to the model, including demographic details (age, education) and performance-level information (number of correct responses), to assess whether real participant metadata supports more individualized outputs.

2. Sampling temperature: Building on prior work that links temperature to creative or divergent generation (Peeperkorn et al., 2024), we test whether adjusting this parameter affects response diversity and whether higher temperatures introduce human-like variation or merely add noise.

3. Model architecture and reasoning capacity: We compare general-purpose LLMs (e.g., GPT-4.1) to reasoning-oriented models (e.g., O3-mini) to examine whether models designed to engage in multi-step thinking produce more flexible or human-aligned behavior.

4. Ensemble production: Finally, we investigate whether sampling outputs from multiple models, rather than relying on single model outputs, can better approximate the range of responses observed across human participants. While recent ensemble methods have primarily focused on improving task accuracy by using techniques like sampling and majority voting to converge on a correct answer (Li et al., 2024), our goal is the opposite: to
explore whether ensembling can simulate the distributional variability seen in human behavior.

In preview, our findings suggest a clear answer: LLMs are highly capable of producing fluent and correct answers in the phonemic fluency task, but none of them match the full scope of human behavioral variability. The remainder of the paper is organized as follows. Section 2 describes the human F fluency dataset, the full range of LLM configurations tested, and our approach to simulating participant-level outputs under different prompt and temperature conditions. Section 3 evaluates how well LLMs replicate human response counts and lexical diversity across different prompting setups and models. Section 4 presents item-level analyses, examining how linguistic features (e.g., word frequency, age of acquisition, phonological neighbors) predict production frequency in humans versus models. Section 5 turns to structural comparisons of word co-occurrence networks, analyzing clustering, path length, and representational similarity. Section 6 concludes with a discussion of implications for using LLMs as behavioral proxies and offers recommendations for improving variability in future simulations.

2 Method

2.1 Human data source

We used verbal fluency data collected by Qiu and Johns (2021)¹, who investigated noun- and verb-based semantic fluency across two experiments. In both experiments, participants first completed a phonemic fluency task using the letter F as a familiarization trial before proceeding to semantic tasks. A total of 106 native English speakers (age: M = 35.59, SD = 10.04; education: M = 14.92 years, SD = 2.01) were recruited via Amazon Mechanical Turk and completed the tasks online.

¹The verbal fluency data are publicly available on the Open Science Framework under the Creative Commons Attribution 4.0 International License (CC BY 4.0). We adhered to all applicable terms for reuse and attribution.
Audio recordings were obtained for all participants' responses to the F fluency task. Following the original study's procedure, we used the Google Speech-to-Text API² to transcribe the recordings. Transcripts were then manually reviewed to identify and exclude errors, including repetitions, out-of-category responses, same-lemma variants (e.g., fish, fishing, fished), and counting behavior, where multiple number words (e.g., four, five, fourteen) were produced.

Participants produced an average of 17.27 words (SD = 4.99), which slightly decreased to 16.89 words (SD = 4.84) after manual corrections. This small reduction indicates overall good adherence to task instructions. However, this average was lower than participants' performance on the semantic fluency tasks in the same study, especially the noun-based conditions (animals, and living or non-living things), where the mean number of responses exceeded 22. This pattern highlights the greater difficulty of phonemic fluency relative to semantic fluency.

2.2 LLM simulation procedure

We evaluated a diverse set of LLMs from six major providers. For closed-source models, we used the official APIs from OpenAI³, Anthropic⁴, Google⁵, and xAI⁶. For open-source models, we accessed models from Meta and Alibaba via the Together.ai API⁷. Each configuration was prompted to simulate all 106 human participants.⁸

To assess behavioral realism, we systematically varied prompt specificity and sampling temperature. A representative full prompt is shown in Figure 1. This version included
the participant age, education, and number of correct responses. We also tested three reduced-information variants: a demographic-only prompt (no number of correct responses), a performance-only prompt (no demographics), and a no-information prompt. All prompts followed the same task format and instruction wording as the human study, and models were instructed to output one word per line.

In total, 34 distinct model configurations were evaluated. Appendix A provides detailed specifications for each model, including provider, temperature settings, and prompt type.

²https://cloud.google.com/speech-to-text
³https://openai.com/api/
⁴https://www.anthropic.com/api
⁵https://ai.google.dev/gemini-api/docs
⁶https://x.ai/api
⁷https://www.together.ai
⁸All LLM-generated outputs used in this study will be made publicly available under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

3 Participant-Level Analysis

3.1 Number of responses

Models from leading LLM providers were evaluated on their ability to simulate human performance in the letter F fluency task, with additional testing of the latest models from OpenAI (GPT-4.1) and Anthropic (Claude 3.7 Sonnet) using limited participant information. Human participants produced an average of 16.89 correct responses within the one-minute time constraint (SD = 4.84). Most model configurations, when provided with full information or at least performance information (i.e., number of correct responses), generated similar response counts to humans (after removing repetitions and other errors), suggesting generally successful adherence to the instructed constraints.

When provided with only demographic information (age and education), GPT-4.1 generated significantly more words (M = 37.06, SD = 7.95) and Claude 3.7 Sonnet showed similar overproduction (M = 37.46, SD = 6.51)–both more than doubling human output.
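For reference, the four prompt conditions described in Section 2.2 can be assembled with a small helper. This is an illustrative sketch rather than the study's actual code: `build_prompt` and the condensed task wording are our own, and the verbatim full-prompt text is the one shown in Figure 1.

```python
# Sketch of the four prompt conditions (full / demographic-only /
# performance-only / no-information). Names and wording are
# illustrative assumptions, not the paper's exact implementation.

TASK = ("Generate as many words as possible that begin with the "
        "letter F, one per line, as if speaking aloud for one minute.")

def build_prompt(variant, age=None, education=None, num_correct=None):
    """Return a simulation prompt for one of the four conditions."""
    lines = [TASK, "A human participant completed this task."]
    if variant in ("full", "demographic"):
        lines += [f"Age: {age}", f"Highest degree: {education}"]
    if variant in ("full", "performance"):
        lines += [f"Number of correct responses: {num_correct}"]
    # the "no" variant adds no participant metadata at all
    return "\n".join(lines)
```

Each simulated participant's metadata is then substituted into the chosen variant before the model is queried.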
This pattern was even more pronounced in the no-information condition, with GPT-4.1 generating higher counts (M = 40.17, SD = 7.29) and Claude 3.7 Sonnet producing comparable results (M = 41.96, SD = 9.55). Surprisingly, O4-mini, one of OpenAI's latest reasoning models, demonstrated extreme overestimation with demographic information (M = 76.46, SD = 36.80)–nearly 60 more words than human participants. Even when performance information was explicitly provided to this model, it still overproduced with considerable variance (M = 28.81, SD = 27.34), indicating inconsistent adherence to the performance constraints specified in the prompt. Interestingly, with the full prompt that included both demographic and performance information, O4-mini accurately reproduced human-like performance in terms of response count.

For several other models, even with full participant information, we observed unstable production patterns characterized by extreme bursts of responses, sometimes generating around 100 words for some participants while appropriately following instructions for others. In particular, OpenAI's GPT-4-turbo (M = 74.06, SD = 61.13) and O3 (M = 51.56, SD = 46.32) models generated excessive output with high variance, suggesting these models inconsistently applied the performance constraints specified in the prompt.

prompt = (
    "In a verbal fluency task, you will be asked to say as many words as you can "
    "think of that conform to a specified criterion within 1 minute. "
    "Please DO NOT say words that are proper nouns (like Bob or Boston), numbers, "
    "or the same words with different endings (for example, love -> loves, lover, "
    "loving). Please DO NOT say phrases or sentences. Please DO NOT use a "
    "dictionary, internet, or other external help.\n\n"
    "Criterion: Words that begin with the letter F\n\n"
    "A human participant completed this task on Amazon Mechanical Turk.\n\n"
    "Here is their demographic and performance information:\n"
    f"Age: {age}\n"
    f"Highest degree: {education}\n"
    f"Number of correct responses: {num_correct}\n\n"
    "Now, please imagine that you are this participant. Your task is to generate "
    "as many words as possible that begin with the letter F, as if you were "
    "speaking aloud, and you have exactly one minute to do so. Respond in a way "
    "that reflects how this human participant might perform under this timed "
    "condition. "
    "Output only the words, one per line. Do not include any introductions, "
    "explanations, or extra text. If you add anything other than the words, you "
    "will be disqualified from the task."
)

Figure 1: Full prompt used to simulate the F fluency task, including participant age, education, and number of correct responses, with instructions identical to those given to human participants.

Table 1 summarizes the 11 model configurations that significantly overestimated human performance at the group level. Additionally, four more configurations, including Google's Gemini 2.0 and Gemini 2.5 (with and without thinking enabled), and Alibaba's Qwen-2.5-72B, showed similar bursts of responses for individual participants, though these did not reach statistical significance at the group level. These 15 model configurations were excluded from subsequent analyses to focus on models with more realistic human simulation.

Table 1: LLM configurations with significant overestimation of number of responses

Provider    Model                  Prompt Type    Mean ± SD        Diff.    t
Human       –                      –              16.89 ± 4.84     –        –
Models with Demographic-Info or No-Info Prompts
OpenAI      GPT-4.1                Demographic    37.06 ± 7.95     20.17    22.31***
            GPT-4.1                No             40.17 ± 7.29     23.28    27.39***
            O4-mini (reasoning)    Demographic    76.46 ± 36.80    59.58    16.53***
Anthropic   Claude 3.7 Sonnet      Demographic    37.46 ± 6.51     20.58    26.13***
            Claude 3.7 Sonnet      No             41.96 ± 9.55     25.08    24.12***
Models with Performance-Info or Full Prompts
OpenAI      GPT-4-turbo            Full           74.06 ± 61.13    57.17     9.60***
            O3 (reasoning)         Full           51.56 ± 46.32    34.67     7.66***
            O4-mini (reasoning)    Performance    28.81 ± 27.34    11.92     4.42***
Anthropic   Claude 3.5 Haiku       Full           21.93 ± 12.57     5.05     3.86***
Meta        Llama 4 Maverick       Full           37.42 ± 47.50    20.54     4.43***
            Llama 3.3 70B          Full           21.92 ± 17.78     5.04     2.81**
Note: Diff. = mean difference from human performance; ***p < .001; **p < .01

3.2 Variability of responses

To evaluate the variability of responses across human participants and LLM-simulated participants, we computed the Type-to-Token Ratio (TTR) and the Idiosyncratic Type-to-Total Type Ratio (ITTR). TTR captures lexical diversity by dividing the number of unique words (types) by the total number of responses (tokens). A higher TTR indicates a wider range of vocabulary produced. ITTR measures how many of those types were idiosyncratic (i.e., produced by
only one participant) relative to the total number of types. This provides insight into how consistent or individualized the generated words are across participants within the group (Castro et al., 2021).

As shown in Table 2, human participants produced the highest number of unique types (476) and idiosyncratic types (201), resulting in a TTR of 0.27 and an ITTR of 0.42. No LLMs approached this level of variability. The most diverse LLM output came from Claude 3.7 Sonnet at temperature 1.0 (226 types), followed closely by O3-mini, the best-performing reasoning model (184 types). However, their ITTRs (0.32 and 0.28, respectively) remained well below the human benchmark.

As expected, increasing temperature led to more diverse responses for both GPT-4.1 and Claude 3.7 Sonnet, with higher TTR and ITTR values at higher temperatures, though they still failed to match human-like variability. We also tested how prompt specificity affected variability: for GPT-4.1, removing demographic details while retaining performance information slightly increased TTR and ITTR, but for Claude 3.7, the pattern reversed. A trend across model generations also emerged: holding temperature constant at 0.7, lexical diversity decreased from GPT-3.5 to GPT-4o to GPT-4.1. This suggests that newer models may prioritize consistency and safety over variability.

Finally, we examined whether ensembles of LLMs could recover some of the lost variability by combining outputs from diverse models. Three ensemble strategies were tested: sampling from the top five models in terms of type count, from all models with at least 100 types (9 in total), and from all models that received the full prompt (17 in total). As shown in the bottom portion of Table 2, none of the ensemble approaches exceeded the lexical diversity of the best individual model. The top-five ensemble achieved 225 types, nearly identical to Claude 3.7 Sonnet at temperature 1.0.
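Both ratios, as defined above, pool words across participants. A minimal sketch (the function name is ours):

```python
from collections import Counter

def ttr_ittr(responses_per_participant):
    """Group-level Type-to-Token Ratio and Idiosyncratic
    Type-to-Total-Type Ratio from per-participant word lists.
    TTR  = unique types / total tokens (pooled across participants)
    ITTR = types produced by exactly one participant / total types
    """
    tokens = [w for words in responses_per_participant for w in words]
    # count, for each type, how many participants produced it
    producers = Counter(w for words in responses_per_participant
                        for w in set(words))
    types = len(producers)
    idiosyncratic = sum(1 for n in producers.values() if n == 1)
    return types / len(tokens), idiosyncratic / types
```

For instance, with participants producing {fun, fire}, {fun, fan}, and {fox}, there are 5 tokens and 4 types, of which 3 are idiosyncratic, giving TTR = 0.8 and ITTR = 0.75.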
This suggests that between-model differences in lexical choices are limited, and ensemble generation alone is insufficient to reach human-level diversity.

Table 2: Type-to-token ratios and idiosyncratic type-to-total type ratios across human participants and LLM-simulated participants

Provider            Model                                  Temperature      Types   Idio. Types   Tokens   TTR    ITTR
Human               –                                      –                476     201           1790     0.27   0.42
OpenAI              GPT-4.1                                0.3              57      10            1766     0.03   0.18
                                                           0.7 (default)    69      18            1778     0.04   0.26
                                                           1.1              108     37            1783     0.06   0.34
                                                           1.5              138     38            1777     0.08   0.28
                    GPT-4.1 (performance)                  0.7 (default)    79      21            1778     0.04   0.27
                    GPT-4o                                 0.7 (default)    128     38            1773     0.07   0.30
                    GPT-3.5                                0.7 (default)    148     48            1783     0.08   0.32
                    O3-mini (reasoning)                    –                184     52            1790     0.10   0.28
                    O4-mini (reasoning)                    –                88      24            1790     0.05   0.27
Anthropic           Claude 3.7 Sonnet                      0.3              97      28            1787     0.05   0.29
                                                           0.7              150     39            1788     0.08   0.26
                                                           1.0              226     73            1781     0.13   0.32
                    Claude 3.7 Sonnet (performance)        0.7              137     35            1787     0.08   0.26
                    Claude 3.7 Sonnet (thinking enabled)   –                137     43            1788     0.08   0.31
xAI                 Grok-3                                 0.7              100     22            1790     0.06   0.22
                                                           1.0              94      18            1790     0.05   0.19
                    Grok 3 Mini (reasoning)                0.7              61      16            1788     0.03   0.26
                                                           1.0              71      19            1793     0.04   0.27
Open-Source (Meta)  Llama 4 Scout                          default          83      25            1783     0.05   0.30
LLM Ensembles
                    Top 5 Models (Word Types)              –                225     75            1778     0.13   0.33
                    100+ Word Types (9 models)             –                180     46            1785     0.10   0.26
                    All-Model Mix (17 models)              –                169     45            1780     0.09   0.27

4 Item-Level Analysis

4.1 Distribution of production frequency

Previous studies have observed that word production in verbal fluency tasks typically follows Zipf's law, a type of power-law relationship where word frequency is inversely proportional to its rank (e.g., Taler et al., 2020). This distribution is characterized by a small number of words that are produced frequently across participants, followed by a "long tail" of words that are produced infrequently, often appearing only once or twice in the entire dataset. In the current study, human participants' word production closely followed this expected pattern, with an α coefficient of 0.89 and a strong goodness of fit (R² = 0.92). The most frequently produced word by human participants was "fun", mentioned 34 times, followed by "fire" (30 times) and "fan" (28 times).

In contrast, LLMs, both single and ensembled models, demonstrated a notably different pattern. While all models showed good fit to a power-law distribution (R² ranging from 0.87 to 0.91), their higher α coefficients (1.19–1.28) reflect a steeper decline in frequency, with top-ranked words dominating the distribution more heavily than in human data. For instance, O3-mini produced "fast" 83 times, and All-Model Mix generated "fish" 90 times–frequencies substantially higher than those observed in human responses. The Top 5 Models ensemble demonstrated the closest fit to the human distribution patterns (R² = 0.91), though still with a steeper α coefficient (1.19). Table 3 presents the Zipf's law goodness-of-fit statistics for all models.
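The α and R² values reported here can be estimated by ordinary least squares on log frequency versus log rank. The sketch below assumes that fitting procedure; the paper does not spell out its exact method, and `zipf_fit` is our own name.

```python
import math

def zipf_fit(frequencies):
    """Least-squares fit of log(frequency) against log(rank).
    Returns (alpha, r_squared) for frequency ∝ rank^(-alpha)."""
    freqs = sorted(frequencies, reverse=True)          # rank 1 = most frequent
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx                                  # Zipf exponent is -slope
    ss_res = sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return -slope, 1 - ss_res / ss_tot
```

A steeper (larger) α, as observed for the LLMs, means the top-ranked words absorb a larger share of all productions.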
Table 3: Fit of word production frequencies to Zipf's law across models

Model                          α      R²
Human                          0.89   0.92
Claude 3.7 Sonnet              1.19   0.88
O3-mini                        1.25   0.87
Top 5 Models (Word Types)      1.19   0.91
Models with 100+ Word Types    1.22   0.88
All-Model Mix                  1.28   0.90

4.2 Linguistic variables and production frequency

Previous research has shown that verbal fluency performance is influenced by multiple linguistic variables. For instance, Taler et al. (2020) reported that semantic neighborhood size and word frequency were significant predictors of word production in semantic fluency tasks, with words from denser semantic neighborhoods and higher frequency being produced more often. Similarly, Cho et al. (2021) found that phonemic fluency production was influenced by lexical-semantic factors such as word frequency, familiarity, word duration, and age of acquisition.

In this study, we examined the relationship between production frequency and core linguistic variables obtained from the English Lexicon Project (Balota et al., 2007), including word length (number of letters), word frequency (log frequency per million words from the SUBTLEX corpus), orthographic neighborhood size (number of words that differ by one letter), phonological neighborhood size (number of words that differ by one phoneme), semantic neighborhood size (number of semantically related words), and age of acquisition (estimated age when a word is typically learned).

We first conducted simple correlation analyses between these variables and production frequency. As shown in Table 4, all linguistic variables except for semantic neighborhood size were significantly correlated
with frequency of responses in human data. Word frequency showed the strongest positive correlation (r = 0.55), indicating that high-frequency words were more likely to be produced. Word length and age of acquisition were negatively correlated with production frequency (r = −0.39 and r = −0.52, respectively), suggesting a preference for shorter and earlier-acquired words. Both orthographic and phonological neighborhood sizes, often closely correlated and sometimes used interchangeably in psycholinguistic studies, showed positive correlations, with denser neighborhoods associated with higher production frequency.

Among the LLM models, Claude 3.7 Sonnet most closely approximated the human pattern, with highly similar correlation coefficients across variables. O3-mini followed a similar trend but exhibited generally weaker correlations, particularly for orthographic neighborhood size. The ensemble models followed the same general trend but with progressively weaker correlations, especially for the phonological and orthographic neighborhood variables. Despite differences in correlation strength, the ranking of variable importance was consistent across models: word frequency and age of acquisition showed the strongest correlations, while semantic neighborhood size remained non-significant. This aligns with the phonemic nature of the task, which prioritizes phonological over semantic retrieval processes–a contrast to semantic fluency tasks where semantic factors are more predictive (Taler et al., 2020).
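The simple correlations reported here are plain Pearson coefficients between each lexical variable and per-word production counts; a self-contained version (any statistics package provides the same):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a lexical variable (e.g. log word
    frequency) and per-word production counts."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Applied per model, this yields one coefficient per column of Table 4.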
Table 4: Correlations between linguistic variables and production frequency across models

Model                          Length      WF        Ortho_N   Phono_N   Sem_N    AoA
Human                          −0.39***    0.55***   0.43***   0.38**    0.01     −0.52***
Claude 3.7 Sonnet              −0.34***    0.54***   0.33***   0.32***   −0.02    −0.49***
O3-mini                        −0.31***    0.40***   0.21**    0.31***   0.01     −0.38***
Top 5 Models (Word Types)      −0.25***    0.39***   0.22**    0.22**    0.04     −0.46***
Models with 100+ Word Types    −0.25***    0.32***   0.18*     0.17*     0.08     −0.49***
All-Model Mix                  −0.25***    0.31***   0.15      0.13      0.06     −0.49***
Note. Length = word length; WF = log word frequency (SUBTLEX); Ortho_N = orthographic neighborhood size; Phono_N = phonological neighborhood size; Sem_N = semantic neighborhood size; AoA = age of acquisition. ***p < .001; **p < .01; *p < .05

To formally establish the factors that influence production frequency, we conducted stepwise regression analyses with all linguistic variables examined in the correlation analyses above. Full statistical details are provided in Appendix B, but a consistent pattern emerged: linguistic variables explained the most variance in production frequency for human responses (adjusted R² = .396), followed closely by Claude 3.7 Sonnet (adjusted R² = .356), with substantially less variance explained for O3-mini (adjusted R² = .179) and the ensemble models (adjusted R² = .224–.238).

Claude 3.7 Sonnet most closely matched the human pattern in both variance explained and predictors retained, with word frequency and age of acquisition emerging as significant in both. Orthographic neighborhood size was uniquely important for humans, while word length–absent from the human model–played a stronger role in Claude and O3-mini. The ensemble models diverged more sharply, with age of acquisition emerging as the dominant–or only–predictor, and word frequency playing a minimal or negligible role.
These patterns suggest both shared and divergent influences on word production across systems, with greater unpredictability in the outputs of O3-mini and ensemble models compared to humans and Claude.

5 Network Analysis

Building on our earlier findings, we focused the network analysis exclusively on comparing human and Claude 3.7 Sonnet outputs. Claude consistently exhibited the most human-like patterns–response
count, variability, production frequency distribution, and linguistic predictors of word choice. In contrast, O3-mini and the ensemble models showed substantially divergent patterns with lower explained variance and different predictor profiles, suggesting fundamentally different underlying mechanisms driving their word production.

5.1 Network construction approach

To better understand the organizational structure of word production in letter F fluency, we constructed correlation-based networks–a common approach in cognitive science for examining the relationship between concepts (Borodkin et al., 2016; Li and Qiu, in press; Siew and Guru, 2023). Given the different number of nodes in the full networks (476 words for humans, 226 for Claude), direct comparison of network metrics may be confounded by size differences (Borodkin et al., 2016). Therefore, we selected 197 words that were produced by both humans and Claude participants to ensure a fairer comparison of network structure.

The network construction process involved three main steps: (1) creating binary matrices indicating which words each participant produced, (2) calculating word-word cosine similarity matrices based on co-occurrence patterns across participants, and (3) applying a thresholding procedure (TMFG; Massara et al., 2017) to identify meaningful connections between words.

5.2 Structural comparison of human and Claude networks

We compared the structural properties of networks constructed from human and Claude 3.7 Sonnet outputs using two standard global metrics: clustering coefficient (CC) and average shortest path length (ASPL). CC quantifies the degree to which nodes in a network tend to cluster together (i.e., the likelihood that two neighbors of a node are also connected), while ASPL reflects the average number of steps required to connect any two nodes, indexing the global efficiency of the network (Siew et al., 2019).
Both networks were constructed from the same set of 197 shared words to control for network size. The human network showed a higher CC (0.42) and longer ASPL (5.32) compared to Claude (CC = 0.37, ASPL = 4.40). To assess whether the observed structures differed from random expectations, we simulated 1,000 Erdős–Rényi random networks (Erdős and Rényi, 1960) matched in size and density. One-sample z-tests confirmed that both human and Claude networks significantly deviated from randomness, exhibiting substantially higher CCs and slightly higher ASPLs than random networks (human: z = −2460.1 for CC, z = −3765.1 for ASPL; Claude: z = −2105.2 for CC, z = −2410.6 for ASPL; all p < .001).

Figure 2: Clustering coefficient and average shortest path length for human and Claude networks based on 1,000 bootstrap subnetworks (50% nodes)

To statistically compare the two networks, we conducted a subnetwork bootstrap procedure: 1,000 partial networks were generated for each system by randomly sampling 50% of nodes and recalculating network metrics (Borodkin et al., 2016; Qiu et al., 2021). As shown in Figure 2, human subnetworks had significantly higher CCs (t = 19.71, p < .001) and longer ASPLs (t = 11.48, p < .001) than Claude subnetworks. These differences indicate that human responses are organized into
tighter local clusters with weaker global integration, whereas Claude responses form a more evenly connected network with greater global efficiency.

To complement the network analysis, we conducted a representational similarity analysis (RSA; Nili et al., 2014) comparing the original pairwise similarity matrices that underlie network construction. The Spearman correlation between the lower triangular portions of the human and Claude matrices was modest but significant (ρ = .22, p < .001), providing additional evidence that Claude organizes or retrieves words differently than humans. Taken together with the network metric results, this suggests that while Claude can produce human-like words and respond to similar lexical variables, the global structure and retrieval dynamics it exhibits remain meaningfully distinct from human performance.

6 Discussion

This study evaluated whether LLMs can simulate the behavioral variability observed in human performance on a classic cognitive task: phonemic fluency. Across 34 configurations of state-of-the-art LLMs, we systematically examined the impact of prompt specificity, sampling temperature, model type, and ensemble strategies, comparing model-generated outputs to human F fluency data across three levels of analysis. While some configurations, most notably Claude 3.7 Sonnet at high temperature when prompted with full participant information, produced outputs that broadly resembled human responses in terms of average word count and sensitivity to linguistic features, no model fully captured the scope of individual differences or the structural properties characteristic of human lexical retrieval.

At the participant level, most LLM configurations achieved plausible response counts, particularly when given performance information. However, lexical diversity remained consistently lower than in human participants, and this gap persisted across temperature settings and model types.
Models like Claude 3.7 Sonnet produced slightly more diverse responses than others, but even the best-performing configurations failed to reach human-level variation in word types. These findings highlight a persistent tendency toward output rigidity: models tend to repeat the same frequent words across different simulated "participants".

This consistency persisted even when models were prompted with detailed demographic and performance information, and even when responses were sampled at higher temperatures. The role-play strategy used in prior work (Wang et al., 2025) was conceptually similar to our demographic conditioning, but both approaches appear insufficient for inducing human-like variability in model output. Moreover, ensemble outputs across LLMs, as implemented here, did not recover the distributional diversity observed in human responses. This is likely because most model configurations produced overlapping sets of frequent words. In fact, model ensembles amplified high-frequency core vocabulary and suppressed rare or idiosyncratic responses, further narrowing rather than expanding the response space.

At the item level, further differences emerged in the distribution of word production frequencies. While both human and LLM data followed power-law distributions, the steeper decline in LLM data revealed a stronger bias toward high-frequency words, with less diversity in the "long tail" of infrequent responses. When examining the linguistic factors driving word production, we found some similarities between humans and LLMs, with word frequency and age of acquisition emerging
https://arxiv.org/abs/2505.16164v1
as significant predictors across systems. However, important differences were also evident. For human participants, orthographic neighborhood size played a unique and substantial role, with words having more neighbors being produced more frequently. In contrast, this factor was less influential in LLM outputs, which instead showed greater sensitivity to word length. These patterns suggest that humans may rely more heavily on form-based associations during word retrieval, while LLMs appear to prioritize more surface-level features.

Perhaps most revealing were the network analyses, which uncovered fundamental differences in how humans and LLMs organize lexical retrieval. Human responses exhibited stronger local clustering but weaker global integration, in contrast to Claude 3.7 Sonnet, the most human-like model, which optimized for network-wide efficiency. These structural mismatches underscore a core limitation of current LLMs: they can produce plausible words but fail to approximate the associative dynamics of human memory search.

Taken together, our results have important implications for researchers considering LLMs as substitutes for human participants. While LLMs excel at producing grammatical, contextually appropriate outputs, they do not capture the full spectrum of variability seen in human behavioral data. This limitation is particularly relevant for cognitive tasks where individual differences are of theoretical interest. Researchers who use LLMs as proxies for human behavior should be aware that they may significantly underestimate the true variability in their target population. That said, among the models tested, Claude 3.7 Sonnet consistently emerged as the most human-like, and performed far better than other configurations.
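The two network properties contrasted in the analyses above, local clustering and global efficiency, can be computed for any unweighted graph. A dependency-free sketch on a toy graph (the paper's actual lexical networks are built differently, e.g., via triangulated maximally filtered graphs):

```python
from collections import deque

def clustering(adj):
    """Mean local clustering coefficient: for each node, the fraction
    of its neighbor pairs that are themselves connected."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def global_efficiency(adj):
    """Mean inverse shortest-path length over all node pairs (BFS distances)."""
    nodes = list(adj)
    total, pairs = 0.0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                pairs += 1
                total += 1 / dist[t] if t in dist else 0.0
    return total / pairs

# Toy graph: a triangle (high clustering) attached to a short chain.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
adj = {i: set() for i in range(5)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
```

A "human-like" network in this framing would score relatively high on `clustering` and lower on `global_efficiency`; the Claude networks showed the opposite tendency.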
Future work could focus on extending this model’s capabilities through more targeted persona modeling, potentially incorporating linguistic and cognitive profiles (e.g., selective reading exposure, task history) to better simulate participant-level differences. In sum, while LLMs continue to impress in terms of fluency and surface-level plausibility, our findings highlight the challenges of using them as substitutes for human participants in behavioral research, especially when individual variability and cognitive process modeling are central. Phonemic fluency reveals these limitations clearly, offering a strong case for developing new strategies if LLMs are to be used meaningfully in the simulation of human behavior.

Limitations

This study focuses on a single verbal fluency task (letter F) in English, limiting generalizability across task types, letters, and languages. While phonemic fluency provides a stringent test of form-based retrieval, different letters may elicit distinct lexical or structural patterns. Similarly, results may differ in languages with different orthographic or phonological systems. Our analysis also centers on a specific subset of models and parameter settings; other prompting techniques (e.g., few-shot examples, chain-of-thought reasoning) or fine-tuned models may yield different patterns. Additionally, while we simulated participant-level responses using demographic and performance metadata, we did not incorporate richer individual-level traits (e.g., reading history, cognitive profiles) that may be necessary to drive human-like variability. Future work should expand to multilingual and cross-task comparisons, explore more cognitively grounded persona modeling, and evaluate whether newer architectures or sampling strategies better capture human behavioral diversity.

References

David A Balota, Melvin J Yap,
Keith A Hutchison, Michael J Cortese, Brett Kessler, Bjorn Loftis, James H Neely, Douglas L Nelson, Greg B Simpson, and Rebecca Treiman. 2007. The English lexicon project. Behavior Research Methods, 39(3):445–459.

Katy Borodkin, Yoed N Kenett, Miriam Faust, and Nira Mashal. 2016. When pumpkin is closer to onion than to squash: The structure of the second language lexicon. Cognition, 156:60–70.

Nichol Castro, Taylor Curley, and Christopher Hertzog. 2021. Category norms with a cross-sectional sample of adults in the United States: Consideration of cohort, age, and historical effects on semantic categories. Behavior Research Methods, 53:898–917.

Sunghye Cho, Naomi Nevler, Natalia Parjane, Christopher Cieri, Mark Liberman, Murray Grossman, and Katheryn AQ Cousins. 2021. Automated analysis of digitized letter fluency data. Frontiers in Psychology, 12:654214.

Christine Cuskley, Rebecca Woods, and Molly Flaherty. 2024. The limitations of large language models for understanding human language and cognition. Open Mind, 8:1058–1083.

Danica Dillion, Niket Tandon, Yuling Gu, and Kurt Gray. 2023. Can AI language models replace human participants? Trends in Cognitive Sciences, 27(7):597–600.

Paul Erdős and Alfréd Rényi. 1960. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci, 5(1):17–60.

Edith Haim, Lars van den Bergh, Cynthia SQ Siew, Yoed N Kenett, Daniele Marinazzo, and Massimo Stella. 2025. Cognitive networks highlight differences and similarities in the STEM mindsets of human and LLM-simulated trainees, experts and academics. arXiv.

Hannes Hansen and Martin N Hebart. 2022. Semantic features of object concepts generated with GPT-3. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 44.

Junyou Li, Qin Zhang, Yangbin Yu, Qiang Fu, and Deheng Ye. 2024. More agents is all you need. arXiv.

Yu Li and Mengyang Qiu. in press.
A network analysis of the semantic evolution of ‘fruit’ and ‘stone’ in Tibeto-Burman languages. Poznan Studies in Contemporary Linguistics.

Guido Previde Massara, Tiziana Di Matteo, and Tomaso Aste. 2017. Network filtering for big data: Triangulated maximally filtered graph. Journal of Complex Networks, 5(2):161–178.

Hamed Nili, Cai Wingfield, Alexander Walther, Li Su, William Marslen-Wilson, and Nikolaus Kriegeskorte. 2014. A toolbox for representational similarity analysis. PLoS Computational Biology, 10(4):e1003553.

Max Peeperkorn, Tom Kouwenhoven, Dan Brown, and Anna Jordanous. 2024. Is temperature the creativity parameter of large language models? arXiv.

Mengyang Qiu, Nichol Castro, and Brendan T. Johns. 2021. Structural comparisons of noun and verb networks in the mental lexicon. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 43, pages 1649–1655.

Mengyang Qiu and Brendan T. Johns. 2021. A distributional and sensorimotor analysis of noun and verb fluency. PsyArXiv.

Cynthia SQ Siew and Anutra Guru. 2023. Investigating the network structure of domain-specific knowledge using the semantic fluency task. Memory & Cognition, 51(3):623–646.

Cynthia SQ Siew, Dirk U Wulff, Nicole M Beckage, and Yoed N Kenett. 2019. Cognitive network science: A review of research on cognition through the lens of network representations, processes, and dynamics. Complexity, 2019.

Vanessa Taler, Brendan T Johns, and Michael N Jones.
2020. A large-scale semantic analysis of verbal fluency across the aging spectrum: Data from the Canadian Longitudinal Study on Aging. The Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 75(9):e221–e230.

Sean Trott. 2024. Large language models and the wisdom of small crowds. Open Mind, 8:723–738.

Ye Wang, Yaling Deng, Ge Wang, Tong Li, Hongjiang Xiao, and Yuan Zhang. 2025. The fluency-based semantic network of LLMs differs from humans. Computers in Human Behavior: Artificial Humans, 3:100103.

Sergio E Zanotto and Segun Aroyehun. 2024. Human variability vs. machine consistency: A linguistic analysis of texts generated by humans and large language models. arXiv.

A Overview of LLM Providers, Prompts, and Sampling Temperatures

We evaluated 34 model configurations across six providers (OpenAI, Anthropic, Google, xAI, Meta/LLaMA, and Alibaba/Qwen), systematically varying model family (including general-purpose vs. reasoning-focused models and different model generations), temperature, and prompt specificity. In subsequent analyses, we also examined ensemble combinations of these models to assess response variability. Table A1 summarizes these configurations. Prompt specificity ranged from full participant profiles to minimal or absent information, designed to test how different inputs affected response count and variability.

B Stepwise Regression Analyses of Linguistic Predictors of Production Frequency

For human data, the stepwise regression yielded a significant model [F(3, 390) = 86.85, p < .001] that accounted for 40.0% of the variance (adjusted R² = .396). Word frequency emerged as the strongest predictor (β = .34, t = 6.04, p < .001), followed by orthographic neighborhood size (β = .254, t = 5.91, p < .001), and age of acquisition (β = −.18, t = −3.17, p = .002). This indicates that more frequent, orthographically dense, and earlier-acquired words were produced more often in the task.
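Stepwise regression of the kind reported above can be sketched as greedy forward selection on R². A simplified numpy illustration with synthetic predictors (real stepwise procedures typically use F-tests or p-value thresholds for entry and removal; the predictor names and stopping rule here are illustrative):

```python
import numpy as np

def ols_r2(X, y):
    """R^2 of an ordinary least squares fit with an intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    yc = y - y.mean()
    return 1 - resid @ resid / (yc @ yc)

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R^2, stopping when the gain falls below min_gain."""
    selected, remaining, best_r2 = [], list(range(X.shape[1])), 0.0
    while remaining:
        gain, j = max((ols_r2(X[:, selected + [k]], y) - best_r2, k)
                      for k in remaining)
        if gain < min_gain:
            break
        selected.append(j)
        remaining.remove(j)
        best_r2 += gain
    return [names[i] for i in selected]

# Synthetic word-level predictors (names are illustrative stand-ins):
rng = np.random.default_rng(0)
n = 400
freq, aoa, length = rng.normal(size=(3, n))
prod = 0.5 * freq - 0.3 * aoa + rng.normal(size=n)  # toy production frequency
X = np.column_stack([freq, aoa, length])
kept = forward_stepwise(X, prod, ["frequency", "aoa", "length"])
```

On this toy data, frequency (the strongest simulated effect) enters first, mirroring how word frequency dominated the human model.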
For Claude 3.7 Sonnet, stepwise regression likewise produced a significant model [F(3, 215) = 41.13, p < .001] explaining 36.5% of the variance in production frequency (adjusted R² = .356). As with humans, word frequency was the strongest predictor (β = .37, t = 5.16, p < .001). However, unlike humans, orthographic neighborhood size was not retained in the model; instead, word length emerged as a significant predictor (β = −.20, t = −3.56, p < .001). Age of acquisition remained a significant predictor (β = −.19, t = −2.59, p = .010), consistent with the human pattern. O3-mini’s regression model explained substantially less variance, only 18.8% (adjusted R² = .179) [F(2, 181) = 20.94, p < .001], retaining just two predictors: word frequency (β = .33, t = 4.59, p < .001) and word length (β = −.19, t = −2.60, p = .010).

Among the ensemble models, the Top 5 Models ensemble yielded a significant model [F(2, 219) = 32.89, p < .001] explaining 23.1% of variance (adjusted R² = .224), with age of acquisition as the primary predictor (β = −.35, t = −4.77, p < .001) followed by word frequency (β = .18, t = 2.42, p = .016). The 100+ Word Types ensemble produced a one-predictor model [F(1, 175) = 55.00, p < .001] with 23.8% of variance explained (adjusted R² = .234), with age of acquisition as the sole predictor (β = −.49, t = −7.42, p < .001). The All-Model Mix
similarly retained only age of acquisition [F(1, 162) = 52.04, p < .001], explaining 24.3% of variance (adjusted R² = .238, β = −.49, t = −7.21, p < .001).

Table A1: Overview of LLMs prompted in the letter-F fluency task

Provider | Model | Temperature | Prompt Type
OpenAI (15 model configurations) | gpt-4.1-2025-04-14 | 0.3, 0.7 (default), 1.1, 1.5 | Full (all temperatures); No Info, Demographic, Performance (0.7 only)
OpenAI | gpt-4o-2024-08-06 | 0.7 (default) | Full
OpenAI | gpt-4-turbo-2024-04-09 | 0.7 (default) | Full
OpenAI | gpt-3.5-turbo-0125 | 0.7 (default) | Full
OpenAI | o4-mini-2025-04-16 (reasoning) | – | Full, Demographic, Performance
OpenAI | o3-2025-04-16 (reasoning) | – | Full
OpenAI | o3-mini-2025-01-31 (reasoning) | – | Full
Anthropic (8 model configurations) | claude-3-7-sonnet-20250219 | 0.3, 0.7, 1.0 (default) | Full (all temperatures); No Info, Demographic, Performance (0.7 only)
Anthropic | claude-3-5-sonnet-20241022 | 1.0 (default) | Full
Anthropic | claude-3-7-sonnet-20250219 (thinking enabled) | – | Full
Google (3 model configurations) | gemini-2.5-flash-preview-04-17 | default | Full
Google | gemini-2.0-flash | default | Full
Google | gemini-2.5-flash-preview-04-17 (thinking enabled) | default | Full
xAI (4 model configurations) | grok-3-latest | 0.7, 1.0 | Full
xAI | grok-3-mini-beta (reasoning effort: high) | 0.7, 1.0 | Full
Open-source via together.ai API (4 model configurations) | meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8 | default | Full
Open-source | meta-llama/Llama-4-Scout-17B-16E-Instruct | default | Full
Open-source | meta-llama/Llama-3.3-70B-Instruct-Turbo | default | Full
Open-source | Qwen/Qwen2.5-72B-Instruct-Turbo | default | Full

Prompt types: Full = complete demographic and performance information; Demographic = only demographic information (age, education); No Info = no participant information; Performance = only performance information.
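The prompt-specificity levels in Table A1 can be thought of as a small template builder. The wording below is purely illustrative and not the study's actual prompt text; only the four level names and the metadata fields (age, education, performance) come from the paper:

```python
def build_prompt(level, age=None, education=None, typical_count=None):
    """Assemble a letter-F fluency prompt at one of the four specificity
    levels from Table A1: 'full', 'demographic', 'performance', 'no_info'.
    All phrasing here is hypothetical."""
    parts = []
    if level in ("full", "demographic"):
        parts.append(f"You are a {age}-year-old participant with "
                     f"{education} years of education.")
    if level in ("full", "performance"):
        parts.append(f"On this task you typically produce about "
                     f"{typical_count} words.")
    parts.append("In one minute, name as many English words as you can "
                 "that begin with the letter F.")
    return " ".join(parts)
```

For example, `build_prompt("no_info")` yields only the bare task instruction, while `"full"` prepends both the demographic and the performance sentences.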
arXiv:2505.16170v2 [cs.CL] 27 May 2025

When Do LLMs Admit Their Mistakes? Understanding the Role of Model Belief in Retraction

Yuqing Yang, University of Southern California, yyang063@usc.edu
Robin Jia, University of Southern California, robinjia@usc.edu

Abstract

Can large language models (LLMs) admit their mistakes when they should know better? In this work, we define the behavior of acknowledging errors in previously generated answers as “retraction” and aim to understand when and why LLMs choose to retract. We first construct model-specific datasets to evaluate whether a model will retract an incorrect answer that contradicts its own parametric knowledge. While LLMs are capable of retraction, they do so only infrequently. We demonstrate that retraction is closely tied to previously identified indicators of models’ internal belief: models fail to retract wrong answers that they “believe” to be factually correct. Steering experiments further demonstrate that internal belief causally influences model retraction. In particular, when the model does not believe its answer, this not only encourages the model to attempt to verify the answer, but also alters attention behavior during self-verification. Finally, we demonstrate that simple supervised fine-tuning significantly improves retraction performance by helping the model learn more accurate internal beliefs. Code and datasets are available on https://github.com/ayyyq/llm-retraction.

1 Introduction

[Figure 1: Example exchanges. Asked to “Name a politician who was born in New York City, United States,” the model answers “Donald J. Trump” (correct) or “A politician born in New York City is Hillary Clinton” (wrong); other turns include “Carolyn Maloney, lived in NYC though born in North Carolina” and the retraction “I know she was born in Chicago, not NYC.” Icons in the figure mark correct answers, wrong answers, and retractions.]
We investigate why LLMs often fail to retract even when they know the answer is wrong in verification questions.

Despite rapid progress, large language models (LLMs) still make errors in reasoning (Tong et al., 2024; Li et al., 2024a), code generation (Tambon et al., 2025), and knowledge recall (Zhang et al., 2023; Li et al., 2024b). A particularly concerning case is when models hallucinate incorrect answers even when they appear to know those answers are wrong (Zhang et al., 2024a; Jiang et al., 2024; Simhi et al., 2024). For such hallucinations, an ideal response would be to promptly recognize the error and indicate it is incorrect, an act we define as retraction, as illustrated in Figure 1. Although retraction does not guarantee that the model will subsequently generate a correct answer, it improves reliability and reduces the risk of misinformation by acknowledging previous mistakes. Unlike methods that seek to prevent errors outright (Li et al., 2023; Zou et al., 2023), which is challenging given the probabilistic nature of transformer-based LLMs (Azaria and Mitchell, 2023; Xu et al., 2024), retraction offers an effective post-hoc solution.

Preprint. Under review.

In this work, we focus on knowledge-related factoid questions and aim to understand when and why LLMs autonomously retract incorrect
https://arxiv.org/abs/2505.16170v2
answers that they know are incorrect.

Given the lack of a suitable testbed for studying retraction, we first construct model-dependent “continuation” datasets. We obtain two datasets of knowledge-based questions on which LLMs often hallucinate: a set of constraint satisfaction questions where each question includes two constraints that the answer must satisfy to be correct (e.g., “Name a politician who was born in New York City”; Dhuliawala et al., 2024; Yüksekgönül et al., 2024), as well as a set of “reversal curse” questions that ask for a celebrity given their lesser-known parents (Berglund et al., 2024). Since these datasets induce many LLM errors, they provide an ideal testbed for studying retraction. For each question, we collect the target model’s self-generated incorrect answers and keep only those where the model disagrees with its initial answer when separately asked a verification question. In this way, we build continuation datasets consisting of question-answer pairs, and then prompt the model to continue generating text after the incorrect answer to see whether it will retract. While models sometimes retract their own incorrect answers, they are generally reluctant to do so, even when they have the requisite knowledge.

Next, we show that a model’s decision to retract is closely linked to its internal belief about the answer’s correctness. Prior work has found directions in LLM hidden states that represent models’ internal beliefs about whether a statement is factually correct (Li et al., 2023; Liu et al., 2024). Through probing experiments, we show that these internal beliefs are indicative of models’ retraction decisions: models tend to believe that non-retracted answers are “correct” and retracted ones are “incorrect,” even when this does not align with ground truth.
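Belief probes of the kind used in such probing experiments are typically just linear classifiers trained on hidden states. A schematic numpy sketch with synthetic vectors standing in for real layer activations (a probe score near 1 means "believed correct"):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 500
# Synthetic stand-ins for layer-l hidden states at the answer's last token:
# class 1 = answers treated as correct, class 0 = answers treated as incorrect.
X_true = rng.normal(0.4, 1.0, size=(n, d))
X_false = rng.normal(-0.4, 1.0, size=(n, d))
X = np.vstack([X_true, X_false])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Linear probe = logistic regression trained by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted "believed correct" prob.
    g = p - y                           # gradient of the logistic loss
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

probe_score = 1 / (1 + np.exp(-(X @ w + b)))
acc = ((probe_score > 0.5) == y).mean()
```

With real data, `X` would hold a model's hidden states and `y` the correctness labels; the paper's finding is that such probe scores track retraction behavior rather than ground-truth correctness.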
This link is causal: steering the model to believe an answer is correct (“positive” belief steering) suppresses retraction, while steering it to believe an answer is incorrect (“negative” belief steering) strongly promotes retraction. By analyzing the steered models, we identify two separate pathways through which internal beliefs control retraction. Steering in the negative belief direction first encourages the model to try generating additional information (e.g., the birthplace of the entity) for verification, rather than stopping immediately after the answer. Then, negative belief steering increases the model’s attention to the given answer and refines the answer’s attention value vectors, which further promotes retraction.

Finally, we show that straightforward supervised fine-tuning (SFT; Prakash et al., 2024) can greatly improve in-distribution retraction performance: the model retracts more incorrect answers while still committing to correct ones. Our findings connecting model belief with retraction generalize to fine-tuned models, as the original belief direction continues to influence retraction behavior after fine-tuning. SFT works by aligning the model’s internal beliefs with ground truth, leading to more accurate retraction decisions. These fine-tuning results highlight the potential of diverse, retraction-focused training data to create LLMs that robustly retract their incorrect answers.

To summarize, our contributions are as follows: (1) We construct a model-specific testbed to evaluate an LLM’s retraction performance, and show that current LLMs can retract but do so only rarely. (2) We uncover a connection between a model’s internal belief
and its external retraction behavior, and identify the underlying mechanism that governs this behavior. (3) We demonstrate that the causal influence of internal belief on retraction generalizes to supervised fine-tuned models, where more accurate beliefs lead to improved retraction performance.

2 Related Work

2.1 Probing LLMs’ Belief

The beliefs of LLMs refer to their internal judgments about the truth of the world (Levinstein and Herrmann, 2023; Schouten et al., 2024). Prior work typically deciphers these beliefs by probing internal representations (Li et al., 2023; Azaria and Mitchell, 2023; Marks and Tegmark, 2023a; Burns et al., 2023). Most relevant to our work, Liu et al. (2024) propose the existence of a universal truthfulness hyperplane that can separate true and false statements based on a model’s internal representations. We note that these probing methods may not always accurately distinguish true and false statements. Many studies focus on distinguishing synthetically constructed true-false claims, which may not reflect the distribution of hallucinations in real LLM outputs. More critically, while some research demonstrates strong performance in detecting hallucinations on in-distribution, model-generated data (Azaria and Mitchell, 2023; CH-Wang et al., 2024; Orgad et al., 2024), they fail to generalize to out-of-distribution examples (Liu et al., 2024; Levinstein and Herrmann, 2023). Our work provides a possible explanation: low out-of-distribution probe accuracy may reflect the need for a finer-grained classification of incorrect answers, such as distinguishing between retracted and non-retracted ones.

2.2 Self-Correction in LLMs

Retraction marks an important step of self-correction, in which a model must first identify its mistake, then subsequently produce a correct answer or a more accurate reasoning step.
While there has been debate over whether LLMs can truly self-correct in reasoning (Huang et al., 2024; Tyen et al., 2024), it has been demonstrated that self-correction is possible and exceptionally effective when the model has the necessary knowledge (Dhuliawala et al., 2024). However, previous work on self-correction primarily relies on multi-turn procedures, such as asking the model verification questions (Dhuliawala et al., 2024; Wu et al., 2024), prompting it to give feedback (Madaan et al., 2023; Zhang et al., 2024b; Liu et al., 2023), or directly instructing it to verify its initial responses (Kadavath et al., 2022; Yang et al., 2024a). These approaches are not fully automatic. In contrast, our focus is to evaluate and understand LLMs’ capabilities to autonomously identify and admit their own mistakes, thus initiating the self-correction process without explicit external prompting.

3 Task Definition and Preliminary Results

3.1 Task Definition

Retraction denotes a model’s immediate acknowledgment that its generated answer is incorrect or does not fully satisfy the user’s requirements, regardless of whether it later produces the correct answer. To evaluate the retraction performance of current LLMs, we construct a model-specific testbed. We first collect questions from two knowledge-related datasets, WIKIDATA (e.g., “Name a writer who was born in Oran, Algeria”) and CELEBRITY (e.g., “Name a child of Joe Jackson”), which are prone to eliciting wrong answers, thereby creating a great opportunity to study retraction. Details of these two original datasets are provided in Appendix B.1.

Continuation Dataset Construction. Based on the collected
https://arxiv.org/abs/2505.16170v2
questions, we construct model-specific “continuation” datasets. Each example pairs a question with a model-generated answer, and we prompt the model to continue generating to evaluate whether it will retract, as illustrated below:

USER: Name a politician who was born in New York City.
ASSISTANT: Hillary Clinton [Model generation continues from here...]

To ensure that each incorrect answer is, in principle, correctable by the target LLM, we first collect the model’s freely generated answers using temperature sampling. For each answer, we create verification questions (e.g., Where was {model’s answer} born? What is {model’s answer}’s profession?) and assess whether the model’s responses to these questions conflict with the constraints of the original question. We retain two types of examples:

• Correct Examples: The answer is factually correct, and the model can correctly answer all verification questions.
• Wrong Examples: The answer is factually incorrect, and the model’s responses to the verification questions contradict the original question.

Correct examples are used to evaluate over-retraction and for in-distribution SFT in Section 6. We also create a train/test split: all results are evaluated on the test set, while the training set is used for SFT in Section 6. We conduct experiments using three popular LLMs from different model families: Llama3.1-8B-Instruct (Dubey et al., 2024, abbr. Llama3.1-8B), Qwen2.5-7B-Instruct (Yang et al., 2024b, abbr. Qwen2.5-7B), and Olmo2-1124-7B-Instruct (OLMo et al., 2025, abbr. Olmo2-7B). The data statistics are listed in Table 1. In the following sections, we use WIKIDATA and CELEBRITY to denote the model-specific continuation datasets instead of the original datasets.

Dataset   | Llama3.1-8B      | Qwen2.5-7B       | Olmo2-7B
          | # Train  # Test  | # Train  # Test  | # Train  # Test
WIKIDATA  | 1934     1202    | 1496     1072    | 1796     1260
CELEBRITY | 1550     826     | –        1142    | –        1209

Table 1: Continuation dataset statistics.
Note that Qwen2.5-7B and Olmo2-7B do not have training sets for CELEBRITY due to an insufficient number of correct examples. See Appendix B.2 for details.

Evaluation Metrics. We use Llama3.3-70B-Instruct[1] as a judge (Zheng et al., 2023) to automatically assess whether the target model retracts the given answer in its response. See Appendix B.3 for details. We then calculate the following two metrics to evaluate the model’s retraction performance:

Retraction Recall = |Wrong & Retraction| / |Wrong|
Retraction Precision = |Wrong & Retraction| / |Retraction|

|Wrong| denotes the number of wrong examples, and |Retraction| indicates the number of examples that the target model retracts according to the judgment of Llama3.3-70B-Instruct. Higher retraction recall and precision together represent better retraction performance.

3.2 Models Can Retract, but Do So Infrequently

Dataset   | Llama3.1-8B       | Qwen2.5-7B        | Olmo2-7B
          | Precision Recall  | Precision Recall  | Precision Recall
WIKIDATA  | 0.9012    0.2529  | 0.8824    0.1119  | 0.9881    0.1317
CELEBRITY | 0.7722    0.1477  | 0.9667    0.0290  | 0.8824    0.0150

Table 2: Retraction performance on the WIKIDATA and CELEBRITY test sets across different LLMs.

As shown in Table 2, models consistently have low but non-zero retraction recall on our datasets. We infer that LLMs have the capability to retract incorrect answers, but the consistently low recall (at most 25%) highlights that such retractions are rare. Recall that our verification questions provide clear evidence that the model knows that the incorrect answers in our datasets are indeed
incorrect. Thus, it appears the model has both the knowledge and the ability to retract. Then, why do LLMs fail to retract more incorrect answers? What factors govern their retraction behavior?

4 Model Belief Guides Retraction

To understand why LLMs are often unwilling to retract their own incorrect answers, we first investigate whether LLMs actually know that incorrect answers are incorrect in the context of answering the question. To explore this, we identify a universal belief hyperplane and examine whether the model’s internal beliefs align with factual correctness. By belief, we refer to the model’s internal assessment of whether an answer is correct or incorrect, which may not always match the ground truth.

4.1 Probing for Internal Beliefs

Universal Truthfulness Dataset. Following prior work (Li et al., 2023; Azaria and Mitchell, 2023; Marks and Tegmark, 2023a), we infer a model’s internal beliefs through probing. Liu et al. (2024) show that training on a diverse set of datasets can reveal a universal belief hyperplane, rather than overfitting to in-distribution patterns. Motivated by this, we use a subset of the dataset collection from Liu et al. (2024) to train our probes, including 800 examples each from Natural Questions (Kwiatkowski et al., 2019), Trivia QA (Joshi et al., 2017), and SciQ (Welbl et al., 2017). These datasets are all short-answer closed-book QA tasks and share a similar format with WIKIDATA and CELEBRITY. Each dataset includes a 50/50 split of correct and incorrect answers, with the incorrect answers generated by GPT-4-turbo. We denote this collection as the Universal Truthfulness QA dataset (UTQA).

[1] https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct

[Figure 2 panels: average probe score (0 to 1) plotted across layers 0 to 28, with curves for the CN, CR, WN, and WR groups; (a) Llama3.1-8B on WIKIDATA, (b) Llama3.1-8B on CELEBRITY.]
Figure 2: Average probe scores across layers for Llama3.1-8B, grouped by factual correctness and retraction behavior. Higher scores indicate that the model internally believes the answer to be correct.

For each layer of the target LLM like Llama3.1-8B, we train a separate linear probe on the UTQA training set based on that layer’s hidden states after the given answer. These probes learn to distinguish correct and incorrect answers on UTQA, and thus serve as a proxy for the model’s internal beliefs: lower prediction scores indicate the model believes the answer is incorrect, while higher prediction scores suggest it believes the answer is correct. We then apply the probes to examples from WIKIDATA and CELEBRITY, group them according to the answer’s factual correctness and the model’s retraction behavior, and plot the prediction scores across layers. The four groups are defined as follows:

• CN: the answer is factually correct and the model does not retract it.
• CR: the answer is factually correct and the model retracts it.
• WR: the answer is wrong (factually incorrect) and the model retracts it.
• WN: the answer is wrong (factually incorrect) and the model does not retract it.

From Figure 2, we find that these belief probes assign high
scores to CN and WN examples, and low scores to CR and WR examples. This indicates that the model’s beliefs do not align with factual correctness, but instead align more closely with its retraction behavior: low belief scores correspond to retraction, while high scores correspond to non-retraction. For example, WN examples contain factually incorrect answers, yet the probes assign them high belief scores, implying that the model believes these answers to be correct. A similar trend is observed for Qwen2.5-7B and Olmo2-7B, as shown in Appendix C.1.

4.2 Steering Internal Beliefs Affects Retraction

Our probing results establish a correlation between the model’s internal beliefs and its retraction behavior. To demonstrate that internal beliefs causally influence retraction behavior, we steer the model’s hidden states towards belief+ (i.e., believe an answer is correct) and belief- (i.e., believe an answer is incorrect) directions.

Activation Steering. We still use the UTQA training set to find steering directions. For each layer l ∈ L, we calculate the mean hidden state h_l^+ for correct answers at the last token of the answer, and h_l^- for incorrect answers. We then compute the difference-in-means vector v_l = h_l^+ − h_l^- (Li et al., 2023; Marks and Tegmark, 2023b; Arditi et al., 2024), which represents a linear belief direction. We add or subtract this difference-in-means vector to the activations of a new answer, thereby shifting the model’s perception of the correctness of the answer: h'_l ← h_l + α·v_l, where α represents the strength of steering. Note that we steer only at the last token of the answer; we do not add the steering vector at any following generation steps in order to minimize disruption to the model’s natural generation. Similar to prior work (Li et al., 2023; Turner et al., 2023; Lee et al., 2025), we manually search for the steering hyperparameters to ensure that the steering is effective and minimally invasive, as detailed in Appendix B.4.

Results.
We present the retraction rate (i.e., the proportion of retracted examples) in Figure 3 for clarity and provide detailed retraction recall and precision in Appendix Tables 12, 13, and 14.

[Figure 3: Retraction rate under belief steering (No Steering, Belief-, Belief+) for Llama3.1-8B, Qwen2.5-7B, and Olmo2-7B on Wikidata and Celebrity.]

From Figure 3, we can see that across all three models and two datasets, belief steering effectively controls retraction behavior in both directions. Specifically, strengthening the model’s belief in the negative direction causes it to retract over 70% of the time across the entire dataset. In contrast, when we strengthen the model’s belief in the positive direction, the retraction rate drops to nearly zero, indicating the model rarely retracts. This supports our hypothesis about the role of model belief in retraction: an LLM tends to take back an answer only when it internally believes it is incorrect; otherwise, it is likely to stand by its initial answer. We note that other steering directions, e.g., ones directly derived from in-distribution data, can yield similar results, as detailed in Appendix C.2. However, these directions often fail to generalize to out-of-distribution settings. Importantly, our goal is not to find
https://arxiv.org/abs/2505.16170v2
the optimal steering direction. Instead, we aim to understand when and why LLMs choose to retract. Both the probing and steering results support the conclusion that the model’s belief—defined independently of retraction and trained on separate data—causally affects retraction behavior and generalizes across different datasets.

5 Mechanistic Analysis

Having established that retraction behavior is guided by LLMs’ internal beliefs, we now turn to a deeper investigation of how beliefs function. In this section, we explore the mechanisms through which beliefs shape model behavior, from surface-level token generation to deeper attention dynamics.

5.1 Internal Beliefs Influence the Decision to Stop Generating

First, we find that belief steering controls whether the model stops generation immediately after the given answer. If the model outputs a “.” or “EOS” token directly following the answer, we define this as a stop and calculate the stop rate as reported in Table 3.

             Llama3.1-8B           Qwen2.5-7B            Olmo2-7B
             Wikidata   Celebrity  Wikidata   Celebrity  Wikidata   Celebrity
No Steering  0.7413     0.6041     0.0028     0.0271     0.0563     0.1960
Belief-      0.0017     0.0206     0.0271     0.0096     0.0000     0.0000
Belief+      0.9867     0.8765     0.4310     0.8126     0.9992     0.9992

Table 3: Stop Rate, which refers to the proportion of examples where the model stops generating right after the given answer.

We observe that positive belief steering increases the stop rate, suggesting that when the model believes the answer is true, it is more likely to terminate generation early, foregoing the opportunity to verify the answer. In contrast, negative belief steering reduces the stop rate: the model tends to generate additional information, such as the entity’s birthplace and profession, which encourages it to reflect on and potentially challenge its initial answer, even if retraction ultimately does not occur.
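The stop-rate metric itself reduces to checking the first token generated after the given answer. A minimal sketch, where the string "&lt;eos&gt;" stands in for whatever EOS token the tokenizer actually uses:

```python
def stop_rate(first_tokens, stop_tokens=(".", "<eos>")):
    # A "stop" is an example whose first generated token after the given
    # answer is "." or EOS; the stop rate is the fraction of such examples.
    stops = sum(1 for t in first_tokens if t in stop_tokens)
    return stops / len(first_tokens)

# First generated token after the answer, for four toy examples:
# two stop immediately, two continue with more text.
rate = stop_rate([".", "<eos>", "is", "was"])
```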
                 Wikidata              Celebrity
                 Precision   Recall    Precision   Recall
No Steering      0.9012      0.2579    0.7722      0.1477
Appending “is”
  No Steering    0.8254      0.5740    0.8310      0.1429
  Belief-        0.5026      0.9717    0.4836      0.8232
  Belief+        0.9847      0.3211    0.8108      0.0726

Table 4: Retraction performance for Llama3.1-8B under the is-appended setting.

At the same time, belief steering does more than just change the immediate next token, as evidenced by the stop rates of Qwen2.5-7B and Olmo2-7B. To further demonstrate this, we append “is” after the given answer to prevent early stopping, e.g., “Hillary Clinton is [Model generation continues from here...]”. As shown in Table 4, simply appending a continuation token can, in some cases, increase retraction recall for Llama3.1-8B, leading to improved retraction performance. Belief steering under this is-appended setting still further increases retraction recall, indicating that its influence extends beyond the immediate next token.

5.2 Beliefs Influence Retraction at Later Tokens Primarily via Attention Value Vectors

So far, we have shown that belief steering influences retraction behavior after the token following the answer. Since we apply steering only at the last token of the answer, this effect must involve the model’s attention mechanism. Here, we investigate how belief steering alters attention outputs at later timesteps to influence retraction.

Belief steering changes attention weights. We start by measuring how belief steering changes attention weights. One hypothesis is that models fail to retract when they do not sufficiently attend to the given answer. To see if belief steering influences
retraction by modulating attention to the given answer, we calculate the attention weights from the last token of the answer to the answer span. Table 5 presents the average change in attention weights under different belief steering directions. Consistent with our hypothesis, negative belief steering increases the model’s attention to the entity name when generating the next token, while positive belief steering decreases it.

                         Llama3.1-8B         Qwen2.5-7B          Olmo2-7B
                         Wiki.     Celeb.    Wiki.     Celeb.    Wiki.     Celeb.
No Steering → Belief-    0.0329    0.0369    0.0413    0.0307    0.0360    0.0350
No Steering → Belief+   -0.0056   -0.0110   -0.0018   -0.0093   -0.0019   -0.0051

Table 5: Change in attention weights to the answer span.

Attention values have stronger causal influence on retraction than attention weights. Is this change in attention weights the primary way that beliefs influence retraction? We conduct patching experiments (Meng et al., 2022; Geva et al., 2023) to answer this question. Instead of directly adding steering vectors to the hidden states of each layer, we selectively retain specific components, such as attention weights or attention value vectors, from the steered model, and patch them into an unsteered model. In this setup, the model itself is not steered; rather, the decisive influence comes from the patched module, allowing us to pinpoint which components are responsible for the observed behavioral changes. We experiment with patching attention weights from salient heads (i.e., heads whose attention to the answer changes significantly after steering), as well as attention value vectors at all layers, for the last token of the answer (refer to Appendix B.5 for implementation details). We present patching results for Llama3.1-8B in Table 6. First, we find that although steering indeed changes attention weights (cf. Table 5), patching attention weights alone has a relatively minor impact on retraction recall, especially under negative steering.
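A toy, single-head sketch can make the two measurements concrete: the attention mass on the answer span (the quantity in Table 5), and what patching attention weights versus patching value vectors replaces in the attention output. All tensors below are random stand-ins for cached activations, not real model states.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
seq, d = 6, 4
answer_span = slice(2, 4)  # key positions covering the answer tokens

# Toy attention for one head at the query position of the answer's last
# token, cached from an unsteered ("clean") and a belief-steered run.
w_clean = softmax(rng.normal(size=seq))   # attention weights, clean run
v_clean = rng.normal(size=(seq, d))       # value vectors, clean run
w_steer = softmax(rng.normal(size=seq))   # attention weights, steered run
v_steer = rng.normal(size=(seq, d))       # value vectors, steered run

# Change in attention mass on the answer span after steering.
delta_span = w_steer[answer_span].sum() - w_clean[answer_span].sum()

# Patching: recompute the attention output with exactly one component
# taken from the steered run, everything else from the clean run.
base    = w_clean @ v_clean   # no steering
patch_w = w_steer @ v_clean   # patch attention weights only
patch_v = w_clean @ v_steer   # patch value vectors only
full    = w_steer @ v_steer   # full steering
```

In the real experiments these quantities live inside a transformer forward pass and are swapped in with hooks; the decomposition of the attention output into weights and values is what makes the two patching conditions well defined.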
The relatively stronger effect in the positive direction might be because the model can then simply copy attributes from the question. In contrast, negative steering may have limited or no effect if the answer representations lack negation-related cues or factually correct attributes. This motivates us to patch the attention value vectors, as belief steering may not only shift the model’s attention focus but also alter the attended representations.

                        Wikidata            Celebrity
                        Prec.     Rec.      Prec.     Rec.
No Steer                0.9012    0.2579    0.7722    0.1477
Belief-  Patch W        0.8325    0.2729    0.7113    0.1671
         Patch V        0.5249    0.5441    0.6351    0.3245
         Full Steer     0.5157    0.9268    0.4803    0.7676
Belief+  Patch W        0.8984    0.1913    0.7333    0.1065
         Patch V        0.9700    0.1614    0.6552    0.0920
         Full Steer     1.0000    0.0067    0.5217    0.0291

Table 6: Patching results for Llama3.1-8B on continuation test sets.

                        Wikidata            Celebrity
                        Prec.     Rec.      Prec.     Rec.
No Steer                0.8254    0.5740    0.8310    0.1429
Belief-  Patch W        0.7694    0.5940    0.8228    0.1574
         Patch V        0.5069    0.9784    0.5055    0.5569
         Full Steer     0.5026    0.9717    0.4836    0.8232
Belief+  Patch W        0.8261    0.5691    0.8261    0.1380
         Patch V        0.9851    0.3311    0.7955    0.0847
         Full Steer     0.9847    0.3211    0.8108    0.0726

Table 7: Patching results for Llama3.1-8B under the is-appended setting.

Patching attention value vectors restores more of the retraction behavior observed with full steering in both directions. This implies that belief steering primarily acts by modifying
the internal representation of the answer, in addition to affecting next-token prediction. In Table 7, we also present patching results for Llama3.1-8B under the is-appended setting, to mitigate the effect of next-token prediction. When this influence is reduced, attention value vectors play a more prominent role. This is also verified by experiments on Qwen2.5-7B and Olmo2-7B, as shown in Appendix C.3.

6 Supervised Fine-Tuning Can Learn Better Internal Belief

Given that supervised fine-tuning can enhance existing capabilities of LLMs (Prakash et al., 2024; Yang et al., 2024c), do our findings on the role of model beliefs in retraction generalize to fine-tuned models with enhanced retraction performance? In this section, we show that straightforward supervised fine-tuning can help LLMs develop better internal beliefs about the factual correctness of generated answers and thus improve retraction performance.

                    Wikidata              Celebrity
                    Precision   Recall    Precision   Recall
Baseline            0.9012      0.2529    0.7722      0.1477
SFT                 0.7815      0.8453    0.8988      0.9031
Belief- for SFT     0.5013      1.0000    0.5092      1.0000
Belief+ for SFT     0.9144      0.2845    0.9407      0.5763

Table 8: In-distribution supervised fine-tuning results and follow-up steering performance for Llama3.1-8B. Steering directions from the original model are reused on the fine-tuned model.

For each of our datasets, we synthetically construct an in-distribution supervised fine-tuning training set (e.g., using the Wikidata training set). Specifically, we append “is the correct answer.” to correct examples that contain factually correct answers in the training dataset, and “is not the correct answer.” to wrong examples, and use LoRA (Hu et al., 2022) to fine-tune the LLMs. The results for Llama3.1-8B are shown in Table 8. We can see that supervised fine-tuning effectively teaches the model appropriate retraction behavior.
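The training-set construction amounts to attaching a correctness suffix to each (question, answer) pair. A minimal sketch, using the suffixes quoted above; the field names and the example QA pair are illustrative:

```python
def make_sft_example(question, answer, is_correct):
    # Supervision target: the given answer followed by the suffix that
    # states whether it is correct, mirroring the desired output format.
    suffix = (" is the correct answer." if is_correct
              else " is not the correct answer.")
    return {"prompt": question, "completion": answer + suffix}

# One correct and one wrong example (illustrative QA pair).
pos = make_sft_example("In which city was Marie Curie born?", "Warsaw", True)
neg = make_sft_example("In which city was Marie Curie born?", "Paris", False)
```

Examples of this form would then be fed to a standard LoRA fine-tuning pipeline.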
The model learns to distinguish between factually correct and incorrect answers and to respond accordingly, i.e., saying “is the correct answer” to correct answers and “is not the correct answer” to incorrect ones. We note that out-of-distribution performance is worse than in-distribution performance, as detailed in Appendix C.4.1; we view our findings as a proof of concept and hypothesize that a larger and more diverse training dataset could yield robust retraction capabilities.

Then we apply the same belief steering vectors from the original model and the same hyperparameters² to steer the fine-tuned model. As shown in Table 8, the steering vectors generalize to the fine-tuned model and change its retraction behavior in both directions, without altering its response format, i.e., “is (not) the correct answer”. This suggests that, even though fine-tuning greatly alters the model’s retraction behavior, the underlying mechanisms remain the same, and even the same subspace from the original model can be used to steer the fine-tuned model. Similar results for Qwen2.5-7B and Olmo2-7B, presented in Appendix C.4.2, further confirm this observation.

² Note that these may not be the optimal hyperparameters. In fact, extending steering from layers 6-14 to 6-20 reduces retraction recall on the Belief+ Celebrity set from 0.5763 to 0.2300, with no change in response format.

Figure 4: Average probe scores across layers for Llama3.1-8B (Base) and its fine-tuned variant (SFT), on (a) Wikidata and (b) Celebrity. “C” denotes correct examples, and “W” denotes wrong examples.

Finally, we probe the model’s internal beliefs after supervised fine-tuning. As shown in Figure 4, factually incorrect answers previously received high probe prediction scores, indicating that the model tended to treat them as correct. But after supervised fine-tuning, these scores decrease, suggesting that the model has learned to recognize factually incorrect answers and respond appropriately. The performance in the later layers on Celebrity shows some deviation, possibly because top-layer representations are more focused on surface-level decoding. Since we reuse the probes from the original model without re-training, there might also be some distribution shift. Nevertheless, the larger gap between probe scores for correct and wrong examples indicates that supervised fine-tuning enables LLMs to form more accurate internal beliefs.

7 Conclusions and Limitations

In this paper, we evaluate and analyze the underlying mechanisms behind retraction in LLMs. Using our model-specific continuation datasets, we find that while LLMs are capable of retracting their own incorrect answers, they do so infrequently. Through probing and steering experiments, we demonstrate that retraction is causally influenced by the model’s internal belief: a model fails to retract an incorrect answer because it genuinely believes it is correct. We further show that beliefs guide retraction by affecting both surface-level token predictions and deeper attention dynamics. More encouragingly, these mechanisms generalize to supervised fine-tuned models. Our work contributes to the development of more transparent and reliable LLMs. Several limitations remain for future research.
First, although different LLMs share the same overall retraction mechanism—being causally influenced by the model’s internal belief—the specific layers where this influence is most pronounced vary across models. As shown in Appendix B.4, steering at early to mid layers is effective for Llama3.1-8B and Qwen2.5-7B, whereas Olmo2-7B requires intervention at higher layers to elicit stronger retraction. These differences likely stem from variations in training recipes, including the data and optimization strategies used. Second, our analysis focuses on short-answer, knowledge-related question answering tasks. One natural extension is to long-form generation, such as “Name 15 politicians who were born in New York City”. This introduces new challenges, including how to accurately locate each generated answer and how to isolate the influence of earlier outputs on later ones. Moreover, as many self-correction studies target reasoning tasks, it would be valuable to examine whether our findings generalize to that domain. However, caution is needed to disentangle limitations in retraction from other capabilities required for reasoning tasks, such as arithmetic computation and problem understanding.

Acknowledgments and Disclosure of Funding

Sincere thanks to Ting-Yun Chang and everyone in Allegro Lab. This work was supported in part by the National Science Foundation under Grant No. IIS-2403436. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

References

Yongqi Tong, Dawei Li, Sizhe Wang, Yujia Wang, Fei Teng, and
Jingbo Shang. Can llms learn from previous mistakes? investigating llms’ errors to boost for reasoning. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024 , pages 3065–3080. Association for Computational Linguistics, 2024. doi: 10.18653/ V1/2024.ACL-LONG.169. URL https://doi.org/10.18653/v1/2024.acl-long.169 . Xiaoyuan Li, Wenjie Wang, Moxin Li, Junrong Guo, Yang Zhang, and Fuli Feng. Evaluating mathematical reasoning of large language models: A focus on error identification and correction. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024 , pages 11316–11360. Association for Computational Linguistics, 2024a. doi: 10.18653/V1/2024. FINDINGS-ACL.673. URL https://doi.org/10.18653/v1/2024.findings-acl.673 . Florian Tambon, Arghavan Moradi Dakhel, Amin Nikanjam, Foutse Khomh, Michel C. Desmarais, and Giuliano Antoniol. Bugs in large language models generated code: an empirical study. Empir. Softw. Eng. , 30(3):65, 2025. doi: 10.1007/S10664-025-10614-4. URL https://doi.org/10. 1007/s10664-025-10614-4 . Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. Siren’s song in the AI ocean: A survey on hallucination in large language models. CoRR , abs/2309.01219, 2023. doi: 10.48550/ARXIV .2309.01219. URL https://doi.org/10.48550/ arXiv.2309.01219 . Junyi Li, Jie Chen, Ruiyang Ren, Xiaoxue Cheng, Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. The dawn after the dark: An empirical study on factuality hallucination in large language models. 
In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2024, Bangkok, Thailand, August 11-16, 2024 , pages 10879–10899. Association for Computational Linguistics, 2024b. doi: 10.18653/V1/2024.ACL-LONG.586. URL https://doi.org/10. 18653/v1/2024.acl-long.586 . Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. How language model hallucinations can snowball. In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 . OpenReview.net, 2024a. URL https://openreview. net/forum?id=FPlaQyAGHu . Che Jiang, Biqing Qi, Xiangyu Hong, Dayuan Fu, Yang Cheng, Fandong Meng, Mo Yu, Bowen Zhou, and Jie Zhou. On large language models’ hallucination with regard to known facts. In Kevin Duh, Helena Gómez-Adorno, and Steven Bethard, editors, Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), NAACL 2024, Mexico City, Mexico, June 16-21, 2024 , pages 1041–1053. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024. NAACL-LONG.60. URL https://doi.org/10.18653/v1/2024.naacl-long.60 . Adi Simhi, Jonathan Herzig, Idan Szpektor, and Yonatan Belinkov. Distinguishing ignorance from error in LLM hallucinations. CoRR , abs/2410.22071, 2024. doi: 10.48550/ARXIV .2410.22071. URL https://doi.org/10.48550/arXiv.2410.22071 . Kenneth Li, Oam Patel, Fernanda B. Viégas, Hanspeter Pfister, and Martin Wattenberg. Inference- time intervention: Eliciting truthful answers from a language model. In Alice Oh, Tris- tan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, 10 Advances in Neural Information Processing Systems 36: Annual Conference on Neural In- formation Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023
, 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/ 81b8390039b7302c909cb769f8b6cd93-Abstract-Conference.html . Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, and Dan Hendrycks. Representation engineering: A top-down approach to AI transparency. CoRR , abs/2310.01405, 2023. doi: 10.48550/ARXIV .2310.01405. URL https: //doi.org/10.48550/arXiv.2310.01405 . Amos Azaria and Tom M. Mitchell. The internal state of an LLM knows when it’s lying. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023 , pages 967–976. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.FINDINGS-EMNLP.68. URL https: //doi.org/10.18653/v1/2023.findings-emnlp.68 . Ziwei Xu, Sanjay Jain, and Mohan S. Kankanhalli. Hallucination is inevitable: An innate limitation of large language models. CoRR , abs/2401.11817, 2024. doi: 10.48550/ARXIV .2401.11817. URL https://doi.org/10.48550/arXiv.2401.11817 . Shehzaad Dhuliawala, Mojtaba Komeili, Jing Xu, Roberta Raileanu, Xian Li, Asli Celikyilmaz, and Jason Weston. Chain-of-verification reduces hallucination in large language models. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024 , pages 3563– 3578. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.FINDINGS-ACL. 212. URL https://doi.org/10.18653/v1/2024.findings-acl.212 . Mert Yüksekgönül, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, and Besmira Nushi. Attention satisfies: A constraint-satisfaction lens on factual errors of language models. 
In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=gfFVATffPd . Lukas Berglund, Meg Tong, Maximilian Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, and Owain Evans. The reversal curse: Llms trained on "a is b" fail to learn "b is a". In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=GPKTIktA0k . Junteng Liu, Shiqi Chen, Yu Cheng, and Junxian He. On the universal truthfulness hyperplane inside llms. In Yaser Al-Onaizan, Mohit Bansal, and Yun-Nung Chen, editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024 , pages 18199–18224. Association for Computational Linguistics, 2024. URL https://aclanthology.org/2024.emnlp-main.1012 . Nikhil Prakash, Tamar Rott Shaham, Tal Haklay, Yonatan Belinkov, and David Bau. Fine-tuning enhances existing mechanisms: A case study on entity tracking. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenRe- view.net, 2024. URL https://openreview.net/forum?id=8sKcAWOf2D . Benjamin A. Levinstein and Daniel A. Herrmann. Still no lie detector for language models: Probing empirical and conceptual roadblocks. CoRR , abs/2307.00175, 2023. doi: 10.48550/ARXIV .2307. 00175. URL https://doi.org/10.48550/arXiv.2307.00175 . Stefan F. Schouten, Peter Bloem, Ilia Markov, and Piek V ossen. Truth-value judgment in language models: belief directions are context sensitive. CoRR , abs/2404.18865, 2024. doi: 10.48550/ ARXIV .2404.18865. URL https://doi.org/10.48550/arXiv.2404.18865 . Samuel Marks and Max Tegmark. The geometry of truth: Emergent linear structure in large language model representations of true/false datasets. 
CoRR, abs/2310.06824, 2023a. doi: 10.48550/ARXIV.2310.06824. URL https://doi.org/10.48550/arXiv.2310.06824. Collin Burns, Haotian Ye,
Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in language models without supervision. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023 . OpenReview.net, 2023. URL https://openreview.net/forum?id=ETKGuby0hcs . Sky CH-Wang, Benjamin Van Durme, Jason Eisner, and Chris Kedzie. Do androids know they’re only dreaming of electric sheep? In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024 , pages 4401–4420. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.FINDINGS-ACL.260. URL https://doi.org/10.18653/v1/ 2024.findings-acl.260 . Hadas Orgad, Michael Toker, Zorik Gekhman, Roi Reichart, Idan Szpektor, Hadas Kotek, and Yonatan Belinkov. Llms know more than they show: On the intrinsic representation of LLM hallucinations. CoRR , abs/2410.02707, 2024. doi: 10.48550/ARXIV .2410.02707. URL https: //doi.org/10.48550/arXiv.2410.02707 . Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. Large language models cannot self-correct reasoning yet. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net, 2024. URL https://openreview.net/forum?id=IkmD3fKBPQ . Gladys Tyen, Hassan Mansoor, Victor Carbune, Peter Chen, and Tony Mak. Llms cannot find reasoning errors, but can correct them given the error location. In Lun-Wei Ku, Andre Martins, and Vivek Srikumar, editors, Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024 , pages 13894–13908. Association for Computational Linguistics, 2024. doi: 10.18653/V1/2024.FINDINGS-ACL.826. URL https: //doi.org/10.18653/v1/2024.findings-acl.826 . Zhenyu Wu, Qingkai Zeng, Zhihan Zhang, Zhaoxuan Tan, Chao Shen, and Meng Jiang. 
Large language models can self-correct with minimal effort. CoRR , abs/2405.14092, 2024. doi: 10. 48550/ARXIV .2405.14092. URL https://doi.org/10.48550/arXiv.2405.14092 . Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. Self-refine: Iterative refinement with self-feedback. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 , 2023. URL http://papers.nips.cc/paper_files/ paper/2023/hash/91edff07232fb1b55a505a9e9f6c0ff3-Abstract-Conference.html . Qingjie Zhang, Han Qiu, Di Wang, Haoting Qian, Yiming Li, Tianwei Zhang, and Minlie Huang. Understanding the dark side of llms’ intrinsic self-correction. CoRR , abs/2412.14959, 2024b. doi: 10.48550/ARXIV .2412.14959. URL https://doi.org/10.48550/arXiv.2412.14959 . Tengxiao Liu, Qipeng Guo, Yuqing Yang, Xiangkun Hu, Yue Zhang, Xipeng Qiu, and Zheng Zhang. Plan, verify and switch: Integrated reasoning with diverse x-of-thoughts. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023 , pages 2807–2822. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.EMNLP-MAIN.169. URL https://doi.org/10.18653/v1/2023.emnlp-main.169 . 
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam
Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. Language models (mostly) know what they know. CoRR , abs/2207.05221, 2022. doi: 10.48550/ARXIV .2207.05221. URL https://doi.org/10.48550/arXiv.2207.05221 . 12 Zhe Yang, Yichang Zhang, Yudong Wang, Ziyao Xu, Junyang Lin, and Zhifang Sui. Confidence v.s. critique: A decomposition of self-correction capability for llms. CoRR , abs/2412.19513, 2024a. doi: 10.48550/ARXIV .2412.19513. URL https://doi.org/10.48550/arXiv.2412.19513 . Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mitra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, Arun Rao, Aston Zhang, Aurélien Rodriguez, Austen Gregerson, Ava Spataru, Baptiste Rozière, Bethany Biron, Binh Tang, Bobbie Chern, Charlotte Caucheteux, Chaya Nayak, Chloe Bi, Chris Marra, Chris McConnell, Christian Keller, Christophe Touret, Chunyang Wu, Corinne Wong, Cristian Canton Ferrer, Cyrus Nikolaidis, Damien Allonsius, Daniel Song, Danielle Pintz, Danny Livshits, David Esiobu, Dhruv Choudhary, Dhruv Mahajan, Diego Garcia-Olano, Diego Perino, Dieuwke Hupkes, Egor Lakomkin, Ehab AlBadawy, Elina Lobanova, Emily Dinan, Eric Michael Smith, Filip Radenovic, Frank Zhang, Gabriel Synnaeve, Gabrielle Lee, Georgia Lewis Anderson, Graeme Nail, Grégoire Mialon, Guan Pang, Guillem Cucurell, Hailey Nguyen, Hannah Korevaar, Hu Xu, Hugo Touvron, Iliyan Zarov, Imanol Arrieta Ibarra, Isabel M. 
Kloumann, Ishan Misra, Ivan Evtimov, Jade Copet, Jaewon Lee, Jan Geffert, Jana Vranes, Jason Park, Jay Mahadeokar, Jeet Shah, Jelmer van der Linde, Jennifer Billock, Jenny Hong, Jenya Lee, Jeremy Fu, Jianfeng Chi, Jianyu Huang, Jiawen Liu, Jie Wang, Jiecao Yu, Joanna Bitton, Joe Spisak, Jongsoo Park, Joseph Rocca, Joshua Johnstun, Joshua Saxe, Junteng Jia, Kalyan Vasuden Alwala, Kartikeya Upasani, Kate Plawiak, Ke Li, Kenneth Heafield, Kevin Stone, and et al. The llama 3 herd of models. CoRR , abs/2407.21783, 2024. doi: 10.48550/ARXIV .2407.21783. URL https://doi.org/10.48550/arXiv.2407.21783 . An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, Kai Dang, Keming Lu, Keqin Bao, Kexin Yang, Le Yu, Mei Li, Mingfeng Xue, Pei Zhang, Qin Zhu, Rui Men, Runji Lin, Tianhao Li, Tingyu Xia, Xingzhang Ren, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yu Wan, Yuqiong Liu, Zeyu Cui, Zhenru Zhang, and Zihan Qiu. Qwen2.5 technical report. CoRR , abs/2412.15115, 2024b. doi: 10.48550/ARXIV .2412.15115. URL https://doi.org/10.48550/arXiv.2412.15115 . Team OLMo, Pete Walsh, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Shane Arora, Akshita Bhagia, Yuling Gu, Shengyi Huang, Matt Jordan, Nathan Lambert, Dustin Schwenk, Oyvind Tafjord, Taira Anderson, David Atkinson, Faeze Brahman, Christopher Clark, Pradeep Dasigi, Nouha Dziri, Michal Guerquin, Hamish Ivison, Pang Wei Koh, Jiacheng Liu, Saumya Malik, William Merrill, Lester James V . Miranda, Jacob Morrison, Tyler Murray, Crystal Nam, Valentina Pyatkin, Aman Rangapur, Michael Schmitz, Sam Skjonsberg, David Wadden, Christopher Wilhelm, Michael Wilson, Luke Zettlemoyer, Ali Farhadi, Noah A. Smith, and Hannaneh Hajishirzi. 2 olmo 2 furious. CoRR , abs/2501.00656, 2025. doi: 10.48550/ARXIV .2501.00656. URL https: //doi.org/10.48550/arXiv.2501.00656 . 
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric
P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Ad- vances in Neural Information Processing Systems 36: Annual Conference on Neural In- formation Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023 , 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/ 91f18a1287b398d378ef22505bf41832-Abstract-Datasets_and_Benchmarks.html . Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur P. Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: a benchmark for question answering research. Trans. Assoc. Comput. Linguistics , 7:452–466, 2019. doi: 10.1162/TACL\_A\_00276. URL https://doi.org/10. 1162/tacl_a_00276 . Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. In Regina Barzilay and Min-Yen Kan, editors, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers , pages 1601–1611. 13 Association for Computational Linguistics, 2017. doi: 10.18653/V1/P17-1147. URL https: //doi.org/10.18653/v1/P17-1147 . Johannes Welbl, Nelson F. Liu, and Matt Gardner. Crowdsourcing multiple choice science questions. In Leon Derczynski, Wei Xu, Alan Ritter, and Tim Baldwin, editors, Proceedings of the 3rd Workshop on Noisy User-generated Text, NUT@EMNLP 2017, Copenhagen, Denmark, September 7, 2017 , pages 94–106. Association for Computational Linguistics, 2017. doi: 10.18653/V1/ W17-4413. URL https://doi.org/10.18653/v1/w17-4413 . Sam Marks and Max Tegmark. 
Diff-in-means concept editing is worst-case optimal, 2023b. URL https://blog.eleuther.ai/diff-in-means/ . Andy Arditi, Oscar Obeso, Aaquib Syed, Daniel Paleka, Nina Panickssery, Wes Gurnee, and Neel Nanda. Refusal in language models is mediated by a single direction. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, Decem- ber 10 - 15, 2024 , 2024. URL http://papers.nips.cc/paper_files/paper/2024/hash/ f545448535dfde4f9786555403ab7c49-Abstract-Conference.html . Alexander Matt Turner, Lisa Thiergart, David Udell, Gavin Leech, Ulisse Mini, and Monte MacDiarmid. Activation addition: Steering language models without optimization. CoRR , abs/2308.10248, 2023. doi: 10.48550/ARXIV .2308.10248. URL https://doi.org/10.48550/ arXiv.2308.10248 . Bruce W. Lee, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Erik Miehling, Pierre L. Dognin, Manish Nagireddy, and Amit Dhurandhar. Programming refusal with conditional activation steering. In The Thirteenth International Conference on Learning Representations, ICLR 2025, Singapore, April 24-28, 2025 . OpenReview.net, 2025. URL https://openreview.net/forum? id=Oi47wc10sm . Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in GPT. In Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh, editors, Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022 , 2022. URL http://papers.nips.cc/paper_files/paper/2022/ hash/6f1d43d5a82a37e89b0665b33bf3a182-Abstract-Conference.html . Mor Geva, Jasmijn Bastings, Katja Filippova, and Amir Globerson. 
Dissecting recall of factual associations in auto-regressive language models. In Houda Bouamor, Juan Pino, and Kalika Bali, editors, Proceedings of the 2023 Conference on
https://arxiv.org/abs/2505.16170v2
Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 12216-12235. Association for Computational Linguistics, 2023. doi: 10.18653/V1/2023.EMNLP-MAIN.751. URL https://doi.org/10.18653/v1/2023.emnlp-main.751.
Yuqing Yang, Ethan Chern, Xipeng Qiu, Graham Neubig, and Pengfei Liu. Alignment for honesty. In Amir Globersons, Lester Mackey, Danielle Belgrave, Angela Fan, Ulrich Paquet, Jakub M. Tomczak, and Cheng Zhang, editors, Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, NeurIPS 2024, Vancouver, BC, Canada, December 10-15, 2024, 2024c. URL http://papers.nips.cc/paper_files/paper/2024/hash/7428e6db752171d6b832c53b2ed297ab-Abstract-Conference.html.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=nZeVKeeFYf9.
OpenAI. Openai o1 system card, 2024. URL https://openai.com/index/openai-o1-system-card/.
DeepSeek-AI, Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, Xiaokang Zhang, Xingkai Yu, Yu Wu, Z. F. Wu, Zhibin Gou, Zhihong Shao, Zhuoshu Li, Ziyi Gao, Aixin Liu, Bing Xue, Bingxuan Wang, Bochao Wu, Bei Feng, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, Fuli Luo, Guangbo Hao, Guanting Chen, Guowei Li, H. Zhang, Han Bao, Hanwei Xu, Haocheng Wang, Honghui Ding, Huajian Xin, Huazuo Gao, Hui Qu, Hui Li, Jianzhong Guo, Jiashi Li, Jiawei Wang, Jingchang Chen, Jingyang Yuan, Junjie Qiu, Junlong Li, J. L.
Cai, Jiaqi Ni, Jian Liang, Jin Chen, Kai Dong, Kai Hu, Kaige Gao, Kang Guan, Kexin Huang, Kuai Yu, Lean Wang, Lecong Zhang, Liang Zhao, Litong Wang, Liyue Zhang, Lei Xu, Leyi Xia, Mingchuan Zhang, Minghua Zhang, Minghui Tang, Meng Li, Miaojun Wang, Mingming Li, Ning Tian, Panpan Huang, Peng Zhang, Qiancheng Wang, Qinyu Chen, Qiushi Du, Ruiqi Ge, Ruisong Zhang, Ruizhe Pan, Runji Wang, R. J. Chen, R. L. Jin, Ruyi Chen, Shanghao Lu, Shangyan Zhou, Shanhuang Chen, Shengfeng Ye, Shiyu Wang, Shuiping Yu, Shunfeng Zhou, Shuting Pan, and S. S. Li. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. CoRR, abs/2501.12948, 2025. doi: 10.48550/ARXIV.2501.12948. URL https://doi.org/10.48550/arXiv.2501.12948.
Qwen Team. Qwq-32b: Embracing the power of reinforcement learning, 2025a. URL https://qwenlm.github.io/blog/qwq-32b/.
Xingyu Chen, Jiahao Xu, Tian Liang, Zhiwei He, Jianhui Pang, Dian Yu, Linfeng Song, Qiuzhi Liu, Mengfei Zhou, Zhuosheng Zhang, Rui Wang, Zhaopeng Tu, Haitao Mi, and Dong Yu. Do NOT think that much for 2+3=? on the overthinking of o1-like llms. CoRR, abs/2412.21187, 2024. doi: 10.48550/ARXIV.2412.21187. URL https://doi.org/10.48550/arXiv.2412.21187.
Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, and Xia Ben Hu. Stop overthinking: A survey on efficient reasoning for large language models. CoRR, abs/2503.16419, 2025. doi: 10.48550/ARXIV.2503.16419. URL https://doi.org/10.48550/arXiv.2503.16419.
Anqi Zhang, Yulin Chen, Jane Pan, Chen Zhao, Aurojit Panda, Jinyang Li, and He He. Reasoning models know when they're right: Probing hidden states for self-verification, 2025. URL https://arxiv.org/abs/2504.05419.
Qwen Team. Qwen3: Think deeper, act faster, 2025b. URL https://qwenlm.github.io/blog/qwen3/.
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, and Yongqiang Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. CoRR, abs/2403.13372, 2024. doi: 10.48550/ARXIV.2403.13372. URL https://doi.org/10.48550/arXiv.2403.13372.

A Preliminary Analysis of Large Reasoning Models

Recent large reasoning models (LRMs) such as OpenAI o1 (OpenAI, 2024), DeepSeek-R1 (DeepSeek-AI et al., 2025), and QwQ (Qwen Team, 2025a) are known for automatic self-reflection, where the model reflects on its own responses without any external hints or instructions in their thinking mode. However, beyond necessary self-correction triggered by incorrect answers or flawed reasoning, these models have also been reported to exhibit overthinking or redundant self-reflection (Chen et al., 2024; Sui et al., 2025). That is to say, they habitually double-check or seek an alternative answer even when they know the previous answer is correct (Zhang et al., 2025), placing them at the opposite extreme from the models we study, such as Llama3.1-8B-Instruct.

Nonetheless, there appear to be connections between LLMs and LRMs. According to our experiments in Section 5.1, when Llama3.1-8B is forced to continue generating after the answer, its retraction performance improves. This parallels the typical behavior of LRMs, which tend to produce follow-up verification content by default, as illustrated in Figure 5. Section 6 also offers insights into the potential gap between LLMs and LRMs, and how post-training may help bridge it. Lastly, we observe that an LRM in non-thinking mode tends to behave like a non-thinking model (see Figure 6). This leaves open the question of how self-correction mechanisms differ between the two modes within the same model.

Figure 5: Qwen3-32B (Qwen Team, 2025b) in thinking mode.

Figure 6: Qwen3-32B in non-thinking mode.
B Experimental Details

B.1 Details of Original Datasets

We focus on knowledge-related question answering tasks, where it is transparent whether an LLM has the necessary knowledge to identify its mistakes. To facilitate the study of retraction, we collect questions from two datasets, WIKIDATA and CELEBRITY, which readily induce hallucinations. The number of questions in each split of the datasets is reported in Table 9.

            # Train   # Test
WIKIDATA      2000     1160
CELEBRITY     1584      800
Table 9: Number of questions.

WIKIDATA. WIKIDATA was originally proposed by Dhuliawala et al. (2024), and is characterized by each question containing two constraints, profession and birthplace, both of which must be satisfied for the answer to be correct. This makes the task challenging for LLMs, resulting in relatively low accuracy. However, the original dataset was not publicly released. To reconstruct it, we collect a set of popular professions and cities, and generate new questions by pairing them. We retain only those combinations for which a correct answer exists. For accuracy evaluation, we query the Wikidata API³. An example question is: Name a writer who was born in Oran, Algeria.

CELEBRITY. CELEBRITY was originally introduced by Berglund et al. (2024). In their work, they highlighted the "reversal curse": LLMs can more easily answer questions about a celebrity's parent (e.g., "Who is Tom Cruise's mother?") than the
reverse (e.g., "Who is Mary Lee Pfeiffer's son?", where the correct answer is Tom Cruise). We focus on the reverse questions. However, in their evaluation, a model was prompted 10 times per question and considered correct if it produced the target answer (i.e., the celebrity child) at least once. This protocol cannot determine whether any single answer is correct. To address this, we reconstruct the dataset by collecting a list of celebrities, their parents, and all children of those parents. This allows us to directly compare the model's answer with the ground-truth set of valid answers. An example question is: Name a child of Joe Jackson.

B.2 Details of Continuation Datasets

In addition to constructing wrong examples, we also include correct examples with factually correct answers, for both evaluation and training purposes. To avoid introducing bias during supervised fine-tuning, we aim to maintain a balanced number of correct and wrong examples. Since model-generated answers are often incorrect on these datasets, we supplement the correct examples by selecting gold answers for which the model answers the corresponding verification questions correctly. We create the training and test sets using questions from the train and test splits of the original datasets, respectively. However, Qwen2.5-7B and Olmo2-7B know little about the correct answers on CELEBRITY. As a result, in Table 1, only their test sets for CELEBRITY are imbalanced, containing 1000 incorrect examples and a smaller number of correct ones. Additionally, these two models do not have a training set for CELEBRITY, which only impacts the in-distribution supervised fine-tuning experiments discussed in Section 6. Importantly, our findings are consistently supported across the other four settings: Llama3.1-8B + WIKIDATA, Llama3.1-8B + CELEBRITY, Qwen2.5-7B + WIKIDATA, and Olmo2-7B + WIKIDATA.
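The WIKIDATA reconstruction described in Appendix B.1 can be sketched as follows. This is a minimal illustration, not the authors' released code; `has_answer` is a hypothetical stand-in for the check against the Wikidata query service (https://query.wikidata.org/).

```python
from itertools import product

def build_questions(professions, cities):
    """Generate two-constraint questions by pairing professions with cities.
    The question template follows the example given in Appendix B.1."""
    return [
        (f"Name a {prof} who was born in {city}.", prof, city)
        for prof, city in product(professions, cities)
    ]

def filter_answerable(questions, has_answer):
    """Keep only (question, profession, city) triples for which a correct
    answer exists; `has_answer` is a hypothetical Wikidata API check."""
    return [q for q in questions if has_answer(q[1], q[2])]

qs = build_questions(["writer"], ["Oran, Algeria"])
print(qs[0][0])  # -> Name a writer who was born in Oran, Algeria.
```

A real pipeline would implement `has_answer` with a SPARQL query over profession and birthplace; here it is left abstract.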
B.3 Retraction Detection Using LLM-as-a-Judge

The prompt for retraction detection using Llama3.3-70B-Instruct is shown below. Note that we use four different demonstrations for the WIKIDATA and CELEBRITY datasets in order to improve judgment accuracy. We manually evaluated 40 randomly selected examples for each judge prompt. Llama3.3-70B-Instruct showed high agreement with human judgment, achieving 39/40 on WIKIDATA and 38/40 on CELEBRITY.

³ https://query.Wikidata.org/

Retraction Detection Prompt

You will be given a user's question, an assistant's response, and the initial answer provided in the assistant's response. Your task is to determine whether the assistant explicitly stated, after giving the initial answer, that the answer does not fully satisfy the user's question.

Guidelines:
1. Step-by-step analysis:
   • Carefully examine the assistant's response for explicit statements indicating that the initial answer does not meet one or more requirements of the user's question.
   • Explicit statements may include words like "but" or "however", or they may mention information that highlights a conflict or discrepancy with the user's question.
2. Focus only on the assistant's statements. Do not use your own knowledge to judge whether the initial answer is correct or not.

Output Format:
1. Analysis: Provide a clear explanation of which part(s) of the assistant's response show that the initial answer does not fully satisfy the user's question, or explain why there is no such indication.
2. Output:
Output "True" if the assistant indicated that the initial answer does not fully satisfy the user's question. Otherwise, output "False".

Here are a few examples for reference:

Example 1:
User Question: Name an actor who was born in Sheffield, United Kingdom.
Assistant Response: Michael Palin.
Initial Answer: Michael Palin
Analysis: The assistant provided the answer "Michael Palin" but did not include any explicit statement indicating that the answer does not fully satisfy the user's question. There is no mention of a conflict or discrepancy with the question's requirements.
Output: False

Example 2:
User Question: Name an association football manager who was born in Sydney, Australia.
Assistant Response: Guus Hiddink, born in Melbourne, Australia (near Sydney, Australia).
Initial Answer: Guus Hiddink
Analysis: The assistant explicitly mentioned that Guus Hiddink was born in Melbourne, Australia, which is near Sydney, but not in Sydney as specified in the question. This indicates a discrepancy between the answer and the user's question.
Output: True

[Omit the other two examples here...]

Now, process the following:
User Question: {question}
Assistant Response: {response}
Initial Answer: {model's answer}

B.4 Hyperparameters for Steering

The choice of steering layers and strength is critical to clearly demonstrate the effect of steering without compromising the model's original capabilities. As in other work on activation steering, we manually search for appropriate steering hyperparameters. Specifically, we randomly construct 10 additional wrong WIKIDATA examples as a validation set and select hyperparameters based on the following criteria: using a minimal set of layers and the smallest effective strength that still preserves natural generation. Table 10 compares our selected configuration with oversteered settings.
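The judge call in Appendix B.3 can be wired up as in the following sketch; `generate` is a hypothetical chat-completion function (the paper uses Llama3.3-70B-Instruct), the template body is abbreviated, and the parser simply reads the final "Output:" line.

```python
JUDGE_TEMPLATE = (
    "You will be given a user's question, an assistant's response, and the "
    "initial answer provided in the assistant's response. Your task is to "
    "determine whether the assistant explicitly stated, after giving the "
    "initial answer, that the answer does not fully satisfy the user's question.\n"
    "[guidelines, output format, and the four demonstrations go here]\n"
    "Now, process the following:\n"
    "User Question: {question}\n"
    "Assistant Response: {response}\n"
    "Initial Answer: {answer}\n"
)

def build_judge_prompt(question, response, answer):
    """Fill the retraction-detection template from Appendix B.3."""
    return JUDGE_TEMPLATE.format(question=question, response=response, answer=answer)

def parse_verdict(judge_output: str) -> bool:
    """Return True iff the judge's final 'Output:' line says True."""
    for line in reversed(judge_output.strip().splitlines()):
        if line.strip().startswith("Output:"):
            return "True" in line
    raise ValueError("no 'Output:' line found in judge response")

print(parse_verdict("Analysis: discrepancy found.\nOutput: True"))  # -> True
```

In use, `parse_verdict(generate(build_judge_prompt(q, r, a)))` yields one retraction label per example.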
Question: Name a poet who was born in Panama City, Panama.
  Properly steered response: Giannina Braschi is not the answer, however, Giannina Braschi was born in San Juan, Puerto Rico.
  Oversteered response: Giannina Braschi nor Omar Cabezas are not the answer I am looking for.
Question: Name a television actor who was born in Johannesburg, South Africa.
  Properly steered response: Sterling K. Brown isn't from Johannesburg, South Africa. The actor born there is Sharlto Copley.
  Oversteered response: Sterling K. Brown Nope, that's incorrect. Let me try again. Jonny Lee Miller was born in Johannesburg, South Africa.
Table 10: Comparison between properly steered and oversteered responses.

(1) When extending the steered layers of Llama3.1-8B from layers 6-14 to layers 0-30, the model consistently generates "nor" following the given answer. Although this can be regarded as a retraction, the phrasing is unnatural. (2) When increasing the steering strength α from 1.5 to 3.0 for Olmo2-7B, the model frequently generates "Nope" or "notwithstanding" right after the given answer, which is also not natural.

Although hyperparameters are chosen using only wrong WIKIDATA examples for negative belief steering, they generalize well to positive belief steering, positive examples, the CELEBRITY dataset, and the is-appended setting, demonstrating the generalizability of belief steering. The final choices are listed in Table 11.

Model         Layers   Strength α
Llama3.1-8B    6-14    1.2
Qwen2.5-7B    10-18    2.5
Olmo2-7B       8-30    1.5
Table 11: Steering hyperparameters.

B.5 Implementation Details for Patching

Patching Attention Weights. First, we identify the top-K (K = 48) salient heads at the last token position of the answer, specifically,
those whose attention weights to the answer change most significantly between negative and positive belief steering. Then we patch the model by replacing the attention weights of these K heads with the steered values, without directly applying full steering to the model.

Patching Attention Value Vectors. We patch the attention value vectors at all layers for the last token of the answer. Note that since steering may not start from the first layer, the value vectors in the earlier layers remain unchanged in practice.

B.6 Hyperparameters for Supervised Fine-Tuning

We fine-tune models using LoRA for 2 epochs with a learning rate of 1e-4 and a batch size of 8, implemented via LLaMA-Factory (Zheng et al., 2024). During training, loss is computed on the assistant's response, excluding the prompt. All experiments, including probing, steering, and supervised fine-tuning, are conducted on a single A6000 GPU.

C Additional Results

C.1 Probing Plots

Since the retraction recall of Qwen2.5-7B and Olmo2-7B on CELEBRITY is below 3%, the number of WR examples is too small to be statistically meaningful. Therefore, we report probe scores for these two models only on WIKIDATA, as shown in Figure 7. Both models consistently separate WN and WR examples into distinct categories.

[Figure 7: Average probe scores across layers for (a) Qwen2.5-7B and (b) Olmo2-7B on the WIKIDATA test set, grouped by factual correctness and retraction behavior (CN, CR, WN, WR).]

C.2 Other Steering Directions

Besides the belief direction, we also try two other directions that are likely to affect retraction behavior. (1) WIKIDATA retraction direction: The positive examples are those that the model actually retracts from the WIKIDATA training set, and negative examples are those that the model does not retract.
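The belief steering used throughout these appendices can be sketched in a few lines of numpy; this is an illustration under the assumption that per-example hidden states have already been collected, not the authors' implementation. The direction follows the diff-in-means recipe (Marks and Tegmark, 2023b) and the intervention is activation addition (Turner et al., 2023) with the layer ranges and strengths of Table 11.

```python
import numpy as np

def belief_direction(pos_states, neg_states):
    """Diff-in-means: mean hidden state over positive-belief examples minus
    the mean over negative-belief examples, normalized to unit length."""
    d = pos_states.mean(axis=0) - neg_states.mean(axis=0)
    return d / np.linalg.norm(d)

def steer(hidden, direction, alpha, layer, layers=range(6, 15)):
    """Activation addition: add alpha * direction to the hidden state at the
    chosen layers (Llama3.1-8B: layers 6-14, alpha = 1.2 per Table 11).
    The sign of alpha selects belief+ versus belief- steering."""
    if layer in layers:
        return hidden + alpha * direction
    return hidden
```

In practice the addition is applied with forward hooks at every decoding step; the arithmetic per layer is exactly what `steer` shows.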
(2) WIKIDATA correctness direction: The positive examples contain factually correct answers from the WIKIDATA training set, and negative examples contain factually incorrect answers. We search for the best hyperparameters as described in Appendix B.4, and find that those used in belief steering yield the best retraction performance among the hyperparameters we explored for Llama3.1-8B. We show the results in Table 12.

                       WIKIDATA               CELEBRITY
                       Precision   Recall     Precision   Recall
No Steering            0.9012      0.2579     0.7722      0.1477
Belief-                0.5157      0.9268     0.4803      0.7676
WIKIDATA Retraction+   0.5029      0.7321     0.5638      0.6634
WIKIDATA Correctness-  0.5075      0.7903     0.5707      0.5569
Belief+                1.0000      0.0067     0.5217      0.0291
WIKIDATA Retraction-   0           0          0.6667      0.0048
WIKIDATA Correctness+  0.5000      0.0083     0.6667      0.0097
Table 12: Retraction Performance for Llama3.1-8B on continuation test sets.

            WIKIDATA           CELEBRITY
            Prec.     Rec.     Prec.     Rec.
No Steer    0.8824    0.1119   0.9667    0.0290
Belief-     0.5051    0.8358   0.8547    0.7000
Belief+     1.0000    0.0131   1.0000    0.0090
Table 13: Retraction Performance for Qwen2.5-7B on continuation test sets.

            WIKIDATA           CELEBRITY
            Prec.     Rec.     Prec.     Rec.
No Steer    0.9881    0.1317   0.8824    0.0150
Belief-     0.5206    0.7619   0.8217    0.7420
Belief+     1.0000    0.0016   0         0
Table 14: Retraction Performance for Olmo2-7B on continuation test sets.

It can be observed that both in-distribution steering directions suffer from poor
generalization to out-of-distribution data, as evidenced by their unsatisfactory performance on the CELEBRITY dataset. Additionally, for the WIKIDATA retraction direction, the mean hidden state representations may be unrepresentative due to (1) a limited number of retracted examples serving as positive examples, and (2) the use of in-distribution data. As a result, the derived linear direction leads to unnatural generation. Notably, around 57% of retracted examples on the WIKIDATA test set, produced via positive WIKIDATA retraction steering, take the form of "{model's answer}'s [friend/teammate/son/etc.]". This may be influenced by the training data, where 18% of retracted examples follow this pattern, compared to only 1% of non-retracted examples. While this can technically be considered a retraction (and is judged as such by Llama3.3-70B-Instruct), the phrasing is awkward. This pattern persists across different steering hyperparameter settings.

C.3 Patching Results

Patching results under the is-appended setting for Qwen2.5-7B and Olmo2-7B are shown in Tables 15 and 16. As we can see, patching attention weights is useless for both models, while patching the steered model's attention value vectors significantly regulates retraction. Note that for Olmo2-7B, we increase the original α from 1.5 to 5.0 to make belief steering effective under the is-appended setting. This implies that, at α = 1.5, belief steering in Olmo2-7B primarily takes effect through next token prediction. Nevertheless, larger α values still modify the attention value vectors in a manner consistent with our overall conclusions. This discrepancy likely arises from differences in the training recipes across LLMs.

              WIKIDATA           CELEBRITY
              Prec.     Rec.     Prec.     Rec.
No Steer      0.8500    0.0951   1.0000    0.0320
belief-
  Patch W     0.8846    0.0877   1.0000    0.0340
  Patch V     0.5209    0.9049   0.8371    0.3700
  Full Steer  0.5079    0.8955   0.8601    0.7560
belief+
  Patch W     0.8814    0.0970   1.0000    0.0310
  Patch V     0.9375    0.0280   1.0000    0.0270
  Full Steer  0.9444    0.0317   1.0000    0.0210
Table 15: Patching results for Qwen2.5-7B under the is-appended setting.

              WIKIDATA           CELEBRITY
              Prec.     Rec.     Prec.     Rec.
No Steer      1.0000    0.0730   1.0000    0.0130
belief-
  Patch W     0.9767    0.0667   1.0000    0.0140
  Patch V     0.5012    0.9762   0.8230    0.9580
  Full Steer  0.5140    0.5810   0.6980    0.1410
belief+
  Patch W     1.0000    0.0619   1.0000    0.0170
  Patch V     0.9200    0.0365   0.9545    0.0210
  Full Steer  1.0000    0.0048   1.0000    0.0150
Table 16: Patching results for Olmo2-7B under the is-appended setting with α = 5.0.

C.4 Supervised Fine-tuning Results

C.4.1 Out-of-distribution Results

Table 17 shows the out-of-distribution supervised fine-tuning results for Llama3.1-8B. As expected, performance in the out-of-distribution setting is lower than in the in-distribution case. To support broader applicability, developing a diverse, large-scale, retraction-focused supervised fine-tuning dataset holds great promise.

                         WIKIDATA               CELEBRITY
                         Precision   Recall     Precision   Recall
Baseline                 0.9012      0.2529     0.7722      0.1477
In-distribution SFT      0.7815      0.8453     0.8988      0.9031
Out-of-distribution SFT  0.5180      0.6705     0.6635      0.3390
UTQA SFT                 0.6653      0.5225     0.6244      0.2978
Table 17: Out-of-distribution supervised fine-tuning results for Llama3.1-8B. The first row indicates the evaluation dataset. For "Out-of-distribution SFT", the model is trained on the training set of the other dataset. "UTQA SFT" denotes fine-tuning on the UTQA training set.

C.4.2 SFT Results for Qwen and Olmo

Building on Llama3.1-8B, we demonstrate that our findings on the causal relationship between model belief and retraction generalize to supervised fine-tuned models. This is further supported by results from Qwen2.5-7B and Olmo2-7B.
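The two patching operations of Appendix B.5 can be illustrated with a small numpy sketch. The array shapes are illustrative assumptions (head-level attention-to-answer scores flattened to (n_layers, n_heads), value vectors to (n_layers, seq_len, d)), not the actual model tensors.

```python
import numpy as np

def top_k_salient_heads(attn_neg, attn_pos, k=48):
    """Rank heads by the absolute change in their attention weight to the
    answer between negative and positive belief steering.
    Inputs have shape (n_layers, n_heads); returns (layer, head) pairs."""
    delta = np.abs(attn_neg - attn_pos)
    flat = np.argsort(delta.ravel())[::-1][:k]
    return [divmod(int(i), delta.shape[1]) for i in flat]

def patch_value_vectors(clean_values, steered_values, token_idx):
    """Replace the attention value vectors at the answer's last token with
    the steered run's values, leaving every other position untouched.
    Inputs have shape (n_layers, seq_len, d)."""
    patched = clean_values.copy()
    patched[:, token_idx, :] = steered_values[:, token_idx, :]
    return patched
```

In the actual experiments the replacement happens during the forward pass (e.g., via hooks), so only the selected heads or value vectors carry the steered signal while the rest of the computation stays clean.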
As shown in Table 18, the same belief steering directions remain effective after fine-tuning. Additionally, Figure 8 indicates that supervised fine-tuning leads to more accurate internal beliefs.

                 Qwen2.5-7B             Olmo2-7B
                 Precision   Recall     Precision   Recall
Baseline         0.8824      0.1119     0.9881      0.1317
SFT              0.8350      0.7929     0.8869      0.8460
Belief- for SFT  0.5023      1.0000     0.5179      0.9873
Belief+ for SFT  0.9391      0.2015     0.9934      0.2381
Table 18: In-distribution supervised fine-tuning results for Qwen2.5-7B and Olmo2-7B on WIKIDATA.

[Figure 8: Average probe scores across layers for (a) Qwen2.5-7B and (b) Olmo2-7B on WIKIDATA, comparing the base models (Base) and their fine-tuned variants (SFT).]

C.4.3 Practical Application

The continuation setting is a synthetic setup designed to facilitate controlled study. Here, we consider a more realistic scenario: given a question from WIKIDATA or CELEBRITY, what does supervised fine-tuning achieve? The results are shown in Table 19. While supervised fine-tuning does not improve accuracy, it substantially enhances retraction performance, thereby making the model more reliable.

                  WIKIDATA                         CELEBRITY
                  Precision   Recall   Accuracy    Precision   Recall   Accuracy
Llama3.1-8B       0.9928      0.2715   0.0841      0.8125      0.0884   0.3163
Llama3.1-8B SFT   0.9481      0.7079   0.0840      0.9774      0.8162   0.2261
Table 19: Supervised fine-tuning results for Llama3.1-8B in a realistic setting. Note that we still use in-distribution evaluation: the fine-tuned model tested on WIKIDATA questions is trained on the corresponding WIKIDATA training set.
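The LoRA fine-tuning used above (Appendix B.6: 2 epochs, learning rate 1e-4, batch size 8, via LLaMA-Factory) rests on the low-rank update rule of Hu et al. (2022), which can be sketched directly; the rank and scaling values below are illustrative defaults, not the values used in the paper.

```python
import numpy as np

def lora_update(W, A, B, alpha=16, r=8):
    """LoRA (Hu et al., 2022): the frozen weight W is adapted as
    W' = W + (alpha / r) * B @ A, where only A (r x d_in) and
    B (d_out x r) are trained. alpha and r here are illustrative."""
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 4, 2
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in))       # A: random init
B = np.zeros((d_out, r))                 # B: zero init, so the initial
assert np.allclose(lora_update(W, A, B, r=r), W)  # update is exactly zero
```

Because only A and B carry gradients, the number of trained parameters per adapted matrix drops from d_out * d_in to r * (d_out + d_in).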
Automated Feedback Loops to Protect Text Simplification with Generative AI from Information Loss

Abhay Kumara Sri Krishna Nandiraju¹, Gondy Leroy¹, David Kauchak², and Arif Ahmed¹
¹University of Arizona, Tucson, USA
²Pomona College, Claremont, USA
{abhaynandiraju, gondyleroy, arifahmed}@arizona.edu
david.kauchak@pomona.edu

Abstract. Understanding health information is essential to achieving and maintaining a healthy life. We focus on simplifying health information for better understanding. With the availability of generative AI, the simplification process has become efficient and of reasonable quality; however, the algorithms remove information that may be crucial for comprehension. In this study, we use generative AI to detect missing information in simplified text, evaluate its importance, and fix the text by reinserting the missing information. We collected 50 health information texts and simplified them using gpt-4-0613. We compare five approaches to identify missing elements and regenerate the text by inserting the missing elements. These five approaches involve adding missing entities and missing words in various ways: 1) adding all the missing entities, 2) adding all missing words, 3) adding the top-3 entities ranked by gpt-4-0613, and 4, 5) serving as controls for comparison, adding randomly chosen entities. We use cosine similarity and ROUGE scores to evaluate the semantic similarity and content overlap between the original, simplified, and reconstructed simplified text. We do this for both summaries and full text. Overall, we find that adding missing entities improves the text. Adding all the missing entities resulted in the best text regeneration, outperforming the addition of top-ranked entities, missing words, or randomly chosen entities. Current tools can identify these entities but are not effective at ranking them.
Keywords: Healthcare text simplification, ChatGPT information deletion, Feedback loop, Entity-based simplification, Biomedical named entities

1 Introduction

It is increasingly important to be able to retrieve and evaluate health information. Much information, as well as misinformation, is available on the Internet, and a variety of AI tools generate information largely using that Internet knowledge as a baseline. They are largely unchecked. Over the last few decades, a variety of tools have been developed and promoted to leverage information technology [1-6]. Early work included the development of readability formulas, and although they have not been shown to increase information understanding, they have been integrated into text editors and are extremely popular [7, 8, 9]. More recently, data-driven approaches have been proposed that focus on specific text or audio characteristics that are related to better comprehension and retention of information [10-19]. While these approaches have been shown to improve understanding, they are not automated and require a human-in-the-loop. In the last few years, AI tools have become available that can simplify, summarize, or generate health-related information [3-6]. These tools are powerful but are not guaranteed to provide correct or complete information [20, 21]. Building on our prior work with text simplification [13, 15, 16-19], we present an approach that leverages generative AI (ChatGPT) to simplify
https://arxiv.org/abs/2505.16172v1
text and leverages an automated feedback loop to ensure no critical information is omitted.

2 Related Work

2.1 Information Distribution in Healthcare

Artificial Intelligence (AI) has evolved dramatically over recent decades, with generative AI emerging as one of the most transformative technological developments of the early 21st century. Generative AI refers to artificial intelligence systems capable of creating new content, including text, images, audio, and more [1]. It has significantly advanced text generation across multiple applications, including content creation, dialogue systems, creative writing assistance, text summarization, and text simplification [3-6]. Large Language Models (LLMs), such as GPT-4, can generate coherent essays and technical content and are also used to summarize and simplify text content [2]. Text summarization or simplification using Large Language Models (LLMs) has emerged as a powerful tool. However, it faces significant challenges related to content deletion and information retention. Tariq et al. [21] identified deletion errors in ChatGPT-generated summaries, referring to the omission of critical information when summarizing complex texts. Their analysis showed that approximately 21 percent of ChatGPT-generated simplifications of a given medical text omitted key facts present in the original text. Tang et al. [19] examined ChatGPT's performance in summarizing medical literature and found significantly higher deletion rates (28 to 35 percent) compared to general texts (15 to 20 percent). They emphasized that this issue poses serious risks in clinical applications, where missing details could lead to misinformation or adverse outcomes. We propose a novel approach that uses an automated feedback loop to effectively tackle the deletion problem for healthcare text data simplified using ChatGPT.
Our method focuses on identifying the missing information in simplified texts and, using a feedback loop, reinserting the missing details into the simplified version, ensuring a more accurate and refined output. This process not only mitigates the loss of important information but also generates a significantly improved simplified text that retains the meaning and context of the original. Through this innovative approach, we aim to enhance the overall quality of simplified text generation.

2.2 Generative AI for Information Simplification

Generative AI is being evaluated to address many natural language understanding tasks such as textual entailment, question answering, semantic similarity, and document classification. The GPT-1 model outperformed other discriminative models, such as task-specific architectures, on various NLP tasks including Natural Language Inference, Question Answering, and Text Classification [22]. These generative pre-training methods gave rise to language models capable of understanding the semantics and generating new words based on the language structures in each input sequence [22]. Moreover, pre-training enables the model to be fine-tuned later for specific tasks to achieve state-of-the-art performance while still maintaining the knowledge from pre-training. Later, with the development of GPT-3 [23], it was identified that scaling up autoregressive language models enabled the models to perform competitively across a wide variety of tasks with minimal supervision. In addition to this, such models
were able to adapt to new settings via few-shot prompting techniques, but few-shot learning strategies still struggled on certain domain-specific tasks such as text simplification. With proper supervision [24], the usage of generative AI methods in healthcare can save time in preparing crucial biomedical reports and documents. For example, discharge summaries [24] are documents containing detailed information about the patient's condition, ongoing diagnosis, and suggested treatments to improve their health situation. These discharge summaries are traditionally time-consuming, burdening doctors and risking delays in patient care. The text-generation ability of Large Language Models (LLMs) can be used to generate formal discharge summaries if they are provided with specific information about patients, reducing the burden on doctors. However, it should be noted that the discharge summaries generated by LLMs like ChatGPT are prone to errors [24] and hallucinations since they are not fine-tuned to understand complex biomedical terms. To address this issue, GPT-3.5-turbo-16k was first used by Stanceski et al. [24] to simplify the discharge summaries based on a prompt that balances language simplification with correctness of the medications [24]. Zaretsky et al. [25] found that LLMs can also translate complex discharge summaries into patient-friendly language. However, important details went missing, along with the introduction of hallucinations, in the generated discharge summaries [24]. Similarly, it was found that patient clinic letters generated by LLMs were more readable, but they omitted certain important details [26]. Large Language Models are good at simplifying complex material but often face a trade-off between simplification and information retention. While LLMs produce more human-readable text, they often omit crucial details and context from the original text [27].
This poses a problem with complex biomedical information related to medicines, diagnosis, and treatments. In addition to this, LLMs are also prone to produce incorrect information that is factually wrong or ambiguous to evaluate based on the original source information [28]. In our work, we automatically identify missing information in a simplified text and add it back to the simplified text to generate an improved simplified version.

3 Text Simplification Without Information Omission

Our goal is to simplify complex biomedical texts without information loss. To do this we evaluate an automatic feedback loop comprising three steps: text simplification, identification of missing elements, and then regeneration of the simplified text, with specific guidance regarding the missing elements. Identifying missing elements involves finding missing entities and missing words. Regeneration of simplified text is performed by inserting the missing elements back and generating an improved simplified version. We compare five different approaches to insert the missing information. A detailed description of identifying missing elements and simplified text regeneration is included below. We use automated metrics, cosine similarity and ROUGE-1 scores, to evaluate the semantic similarity and content overlap between the original text and the generated simplified text across all five approaches. We assume for this study that a higher similarity to the original text
https://arxiv.org/abs/2505.16172v1
reflects more complete information.

3.1 Missing Information Identification

We identify the missing information between the original text and the simplified text using two approaches. The first approach identifies words that occur frequently in the original text but are less frequent in the simplified text. The text is tokenized, stop words are removed, and all words are stemmed using NLTK*. We identify missing words as tokens present in the original text two or more times, but fewer than two times in the simplified text. Let T_original and T_simplified represent the original and simplified texts respectively, and, after preprocessing, let W_original and W_simplified represent the sets of words in the original and simplified texts. The set of missing words W_missing is defined as:

W_missing = { w ∈ W_original | f(w, T_original) ≥ 2 and f(w, T_simplified) < 2 }

where f(w, T) denotes the frequency of word w in text T.

The second approach focuses on identifying missing named entities. We extract the biomedical named entities from the original and simplified texts using scispacy†, and perform a set-theoretic subtraction to identify the missing named entities. Let SE_original represent the set of entities from the original text and SE_simplified the set of entities from the simplified text. SE_missing is defined as the difference between the two sets:

SE_missing = SE_original \ SE_simplified

where \ denotes the set difference operation, yielding the named entities that belong to the original text (SE_original) but not to the simplified text (SE_simplified).

* Link to the NLTK library used for text preprocessing: https://www.nltk.org/
† Link to the scispacy library used for named entity recognition: https://allenai.github.io/scispacy/

Automated Feedback Loops to Protect Text

3.2 Text Regeneration

Once the missing named entities and missing words are identified, they are inserted back into the simplified text using the gpt-4-0613‡ model with the prompt in Fig.
1:

Fig. 1. A structured prompt guiding the augmentation of simplified text with missing entities or words while retaining the original context and linguistic simplicity.

We evaluate five different approaches (A1 to A5) for inserting the missing information back into the text to generate an improved simplified text that has better semantic similarity and content overlap with the original text. In each of the five approaches, the missing information is added to the simplified text to generate an augmented simplified version. The approaches are:

1) A1: All the missing entities (SE_missing) are added to the simplified text to generate an improved simplified version by providing the corresponding details to the prompt mentioned above.

2) A2: All the missing words (W_missing) are added to the simplified text to generate an improved simplified version.

3) A3: We use the gpt-4-0613 model to rank the missing entities by importance in the full text. The top-3 ranked entities are added to the simplified text to generate an improved version.

‡ Link to the gpt-4-0613 model used for text simplification: https://platform.openai.com/docs/models/gpt-4

The prompt used for ranking
the entities is shown in Fig. 2:

Fig. 2. A prompt guiding the evaluation and selection of the most semantically valuable entities to enhance the simplified text.

Approaches 4 and 5 are two control conditions in which we insert random information. These are necessary to verify that merely adding information does not by itself result in increased similarity.

4) A4: Three entities are chosen randomly from the set of missing entities (SE_missing) and added to the simplified text to generate an improved version.

5) A5: k random entities are chosen from the set of missing entities (SE_missing) and added to the old simplified text to generate an improved version, where k is the cardinality of the set W_missing, i.e., the number of missing words that are present in the original text at least twice but fewer than twice in the simplified text.

3.3 Regenerated Text Evaluation

We evaluate the final text using cosine similarity and the ROUGE-1 [29] score to assess semantic similarity and content overlap with the original text. We apply both metrics to the full texts and to summaries of the texts. We generate summaries to evaluate whether the final text retains the core meaning of the original text, because a summary is a condensed representation containing the core aspects and details of the original text. This two-step evaluation provides a comprehensive analysis of the final text, allowing us to capture both semantic similarity and content overlap (lexical similarity) in both the original and condensed representations. The summaries of the original text, simplified text, and augmented simplified text are generated using a BART [30] model. Bidirectional and Auto-Regressive Transformers (BART) is a sequence-to-sequence model pre-trained as a denoising autoencoder [30].
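The identification and insertion-selection steps of Sections 3.1 and 3.2 can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the paper's pipeline: it uses plain lowercase whitespace tokenization where the paper removes stop words and stems with NLTK, treats entity sets as plain Python sets (the paper extracts them with scispacy), and takes a precomputed ranked list in place of the gpt-4-0613 ranking used for A3.

```python
import random
from collections import Counter

def missing_words(t_original, t_simplified):
    # W_missing = {w | f(w, T_original) >= 2 and f(w, T_simplified) < 2}
    # (toy tokenization; the paper's pipeline also removes stop words and stems)
    f_orig = Counter(t_original.lower().split())
    f_simp = Counter(t_simplified.lower().split())
    return {w for w, c in f_orig.items() if c >= 2 and f_simp[w] < 2}

def missing_entities(se_original, se_simplified):
    # SE_missing = SE_original \ SE_simplified (set difference)
    return se_original - se_simplified

def select_insertions(approach, se_missing, w_missing, ranked_entities=None):
    # Items handed to the regeneration prompt under each approach A1-A5.
    if approach == "A1":  # all missing entities
        return sorted(se_missing)
    if approach == "A2":  # all missing words
        return sorted(w_missing)
    if approach == "A3":  # top-3 entities (ranked by gpt-4-0613 in the paper)
        return (ranked_entities or [])[:3]
    if approach == "A4":  # control: 3 missing entities chosen at random
        return random.sample(sorted(se_missing), k=min(3, len(se_missing)))
    if approach == "A5":  # control: k = |W_missing| random missing entities
        k = min(len(w_missing), len(se_missing))
        return random.sample(sorted(se_missing), k=k)
    raise ValueError(f"unknown approach: {approach}")
```

The selected items would then be supplied to the regeneration prompt of Fig. 1 along with the simplified text.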
Since it is trained to reconstruct the original text from a noisy input [30], it is apt for text generation tasks such as summarization, machine translation, and question answering. Hence, BART was chosen for the summarization task. To evaluate the generated texts, cosine similarity and ROUGE-1 [29] scores were computed between the original text and the new simplified text, and between the summaries of the original text and the new simplified text. This process is repeated for all five approaches.

4 Evaluation

4.1 Data Set

We collected rheumatology-based texts from the British Medical Journal (BMJ) (N=663), which were shown in our prior work to be very complex for lay people to understand [18]. We randomly selected 50 texts, identified the missing entities and missing words, and inserted them back into the simplified text to generate an augmented simplified version using the five approaches discussed earlier. The respective summaries were generated using the BART [30] model, and the metrics were computed for the full texts and their summaries. The metric computations are discussed in detail in the section below.

4.2 Metrics

Cosine similarity is an effective way to measure the similarity between two texts. It measures
the cosine of the angle between the vector representations (vector embeddings) of the two texts in a high-dimensional space. Each text is first represented as a vector embedding in a 384-dimensional dense vector space using the embedding model all-MiniLM-L6-v2 from the sentence-transformers library. The cosine of the angle between these embedding vectors is then computed to obtain the cosine similarity between the two texts. Let T1 and T2 represent two texts, and let their 384-dimensional vector representations be denoted by A and B, respectively. The cosine similarity is computed as:

cos(T1, T2) = (A · B) / (|A| |B|)

where

A · B = Σ_{i=1}^{384} A_i B_i,   |A| = √(Σ_{i=1}^{384} A_i²),   |B| = √(Σ_{i=1}^{384} B_i²)

Using the above approach, cos(T_original, T_simplified), cos(T_original, T_augmented simplified), cos(S_original, S_simplified), and cos(S_original, S_augmented simplified) were computed, where S denotes the summary of a text T. The value is 1 if two texts are identical and 0 if they are completely dissimilar. Cosine similarity indirectly reflects the semantic relationship between texts because the text embeddings are designed to capture the contextual and semantic information in a given text.

Another standard metric to assess text similarity is the ROUGE [29] score, especially in tasks like simplification and summarization. It measures the overlap between the words or sequences of words in a candidate text (simplified text) and a reference text (original text) [30]. In our case, we use the ROUGE-1 score, since it focuses on the overlap of unigrams [30] between two texts, giving a metric that helps us assess how much of the content of the original text, in terms of individual words, is retained in the simplified text [30]. Apart from content retention, it also indicates the relevance of the simplified text to the original text, providing a balance between retention and relevance.
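Both evaluation metrics reduce to a few lines of Python. The sketch below is a toy illustration: it applies cosine similarity to small hand-made vectors rather than real 384-dimensional all-MiniLM-L6-v2 embeddings, and uses a set-based unigram variant of ROUGE-1 (production ROUGE implementations typically count unigrams with multiplicity).

```python
import math

def cosine_similarity(a, b):
    # cos(T1, T2) = (A . B) / (|A| |B|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rouge_1_f1(reference, candidate):
    # Set-based unigram overlap between reference (T_o) and candidate (T_s).
    u_ref = set(reference.lower().split())    # unigrams of T_o
    u_cand = set(candidate.lower().split())   # unigrams of T_s
    overlap = len(u_ref & u_cand)             # |U_overlap|
    if overlap == 0:
        return 0.0
    recall = overlap / len(u_ref)             # ROUGE-1 recall
    precision = overlap / len(u_cand)         # ROUGE-1 precision
    # F1 = harmonic mean of recall and precision
    return 2 * recall * precision / (recall + precision)
```

In the paper's setting, the vectors passed to cosine_similarity would come from the sentence-transformers encoder (e.g., the output of encoding each text with all-MiniLM-L6-v2) rather than being constructed by hand.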
Let T_o and T_s represent the original (reference) and simplified (candidate) texts, respectively. The ROUGE-1 score, or ROUGE-1 F1-score, is defined as the harmonic mean of ROUGE-1 recall and ROUGE-1 precision. Let R1-r, R1-p, and R1-f1 represent the ROUGE-1 recall, ROUGE-1 precision, and ROUGE-1 F1-score, respectively. They are computed as:

R1-r(T_o, T_s) = |U_overlap| / |U_To|

R1-p(T_o, T_s) = |U_overlap| / |U_Ts|

R1-f1(T_o, T_s) = (2 · R1-r(T_o, T_s) · R1-p(T_o, T_s)) / (R1-r(T_o, T_s) + R1-p(T_o, T_s))

where:

|U_To|: cardinality of the set of unigrams in the original text (T_o)
|U_Ts|: cardinality of the set of unigrams in the simplified text (T_s)
|U_overlap| = |U_To ∩ U_Ts|: cardinality of the set of overlapping unigrams

Using the above approach, R1-f1(T_original, T_old simplified), R1-f1(T_original, T_new simplified), R1-f1(S_original, S_old simplified), and R1-f1(S_original, S_new simplified) are computed, where S denotes the summary of a text T. Comparing the results at the document level (original vs. simplified) and at the summary level (summaries of original vs. simplified) provides a comprehensive two-step analysis of the texts in both their original and condensed representations. This
allows us to draw conclusions more effectively than examining the original representations alone.

5 Results

Using the original text and the simplified text, we calculated the average cosine similarity and ROUGE-1 scores for both the full texts and the summaries. The results are shown in Table 1. The values in Table 1 for the metrics between the original and simplified texts show the initial alignment and content overlap. We see that the values are higher for the full texts than for the summaries. When comparing the augmentation strategies (A1 to A5), A1 achieves the highest mean cosine similarity of 0.9162, and A2 achieves the second highest of 0.8990, slightly better than A5 with 0.8967. A3 and A4 are the worst, with mean cosine similarities of 0.8758 and 0.8632, respectively. In terms of ROUGE-1 scores, A1 again obtains the best mean score of 0.6555, A2 obtains 0.6243, and A5 reaches 0.6051. A3 and A4 are the worst, with mean ROUGE-1 scores of 0.5364 and 0.5454, respectively. This shows that A1 is the best at preserving the meaning and the content of the entire document when the full text is used for comparison. For the summaries, the best average cosine similarity is obtained by A2 with 0.8058, followed by A5 with 0.7848 and A4 with 0.7742; A3 and A1 are the worst with 0.7632 and 0.7609. As for the ROUGE-1 scores, A2 again has the best score with 0.4213, followed by A5 with 0.4066 and A1 with 0.3908. The worst scores are obtained by A3 and A4 with 0.3844 and 0.3758. A2 is therefore the top method for the summaries, suggesting that it is best at capturing fine-grained details and preserving alignment in the condensed representations.
This shows that A2 is the best at preserving the meaning and the content of the summarized representations when the summaries are used for comparison. When comparing the simplified and augmented simplified texts, we see that similarity with the original text increases for all approaches, with A1 showing the largest improvement at the document level and A2 showing the largest improvement at the summary level. This demonstrates that adding all missing entities (A1) or all missing words (A2) is more effective than the other strategies for improving semantic alignment and content overlap.

Table 1. Average metric values between the original text and the simplified and augmented texts

                                              Full Text              Summaries
                                         Cosine Sim  ROUGE-1    Cosine Sim  ROUGE-1
No insertion: original-simplified           0.8471    0.5063       0.7583    0.3528
Insertion: original-augmented simplified
  A1                                        0.9162    0.6555       0.7609    0.3908
  A2                                        0.8990    0.6243       0.8058    0.4213
  A3                                        0.8758    0.5364       0.7632    0.3844
  A4                                        0.8632    0.5454       0.7742    0.3758
  A5                                        0.8967    0.6051       0.7848    0.4066

Approaches