| text string | source string |
|---|---|
LLM from English ASR-labeled data (8k public and 12k in-house; the public portion is a subset of the NeMo ASR set in [21]). This data is then synthesized using the same TTS, denoted as ASR-QA. The resultant user speech contains both TTS and real data. An evaluation set used in Sec. 5.2 is created from the public data portion. We use a fixed speaker to generate agent speech and randomly select speakers for user speech.

We create duplex training data from the aforementioned user-agent QA pairs. First, we split a pair of utterances into two streams, corresponding to the user and agent portions separately, and then insert silence into the agent stream when the user speaks, and vice versa. This gives us two streams of speech (shown as the first turn in Fig. 2). This duplex structure enables the model to listen and speak simultaneously at any time. To prevent the agent from barging in unexpectedly, we insert a 0.64 s silence between user and agent before the agent speaks. This duration is chosen to balance modeling of user pauses with low turn-taking latency.

4.2. Augment with Multi-turn and Barge-in

In order for the model to learn multi-turn conversation, we also create duplex data that includes two or more turns of conversation between the user and agent (e.g., Fig. 2). First, we synthesize 3k hours of duplex data from a text-based multi-turn Internal SFT dataset to form multi-turn spoken QA. To ensure a more conversational flow, we limit each turn of the text SFT data, which is typically very long, to under 25 seconds. Second, we augment the single-turn data from Sec. 4.1 by randomly concatenating two QA pairs from the same dataset. The multi-turn data topics focus on role-playing, daily topics, scientific topics, etc. Moreover, when creating multi-turn data, we allow the user to barge in by cutting off the agent speech.
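The stream-splitting and barge-in truncation steps described in this section can be sketched as follows. This is a minimal illustration under assumed representations, not the authors' code: streams are lists of (label, duration) segments, and the function names are hypothetical. The 0.64 s constants come from the paper.

```python
AGENT_GAP = 0.64  # silence inserted before the agent speaks (from the paper)

def make_duplex_turn(user_dur: float, agent_dur: float):
    """Split one user-agent QA pair into two time-aligned streams:
    each stream holds silence while the other side speaks."""
    user_stream = [("speech", user_dur), ("silence", AGENT_GAP + agent_dur)]
    agent_stream = [("silence", user_dur + AGENT_GAP), ("speech", agent_dur)]
    return user_stream, agent_stream

def barge_in_truncate(agent_stream, barge_in_t: float, latency: float = 0.64):
    """Barge-in augmentation: cut the agent stream shortly after the user's
    barge-in time (keeping a small latency margin of agent speech) and pad
    the remainder of the agent turn with silence."""
    total = sum(d for _, d in agent_stream)
    keep = min(barge_in_t + latency, total)
    out, t = [], 0.0
    for label, dur in agent_stream:
        if t + dur <= keep:
            out.append((label, dur))       # segment fully before the cutoff
        elif t < keep:
            out.append((label, keep - t))  # segment straddles the cutoff
        t += dur
    out.append(("silence", total - keep))  # pad the rest with silence
    return out
```

For a 3.0 s user utterance and a 5.0 s agent reply, both streams come out time-aligned at 8.64 s; truncating the agent stream at a 5.0 s barge-in keeps 0.64 s of residual agent speech and pads the remaining 3.0 s with silence.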
After the cutoff, we keep a small duration (0.64 s) of agent speech to account for barge-in latency, and pad the rest of the agent turn with silence. As we show in later results, this straightforward approach enables the model to learn barge-in behavior.

4.3. Conversational data

To enhance the model's conversation ability on daily topics, we create the Topic and UltraChat datasets (totaling 3.3k hours, as shown in Table 1). For both datasets, we first generate 4-turn text-based conversations and then synthesize them using a TTS model [26]. For Topic, we randomly choose a topic between user and agent and prompt the Meta-Llama-3.1-70B-Instruct model [4] to generate a conversation. The topics are randomly chosen from the everyday-conversation dataset [30], which covers 63 everyday and science topics. To generate concise replies for efficient training, we restrict each turn to 30 words in the prompt. The generated conversations are then synthesized into speech and converted to the duplex data format. For UltraChat, we randomly sample a chat conversation from the UltraChat dataset [31] to use as contextual information in the prompt to generate a 4-turn conversation similar to Topic.

5. Experiment Details

5.1. Training Details | https://arxiv.org/abs/2505.15670v1 |
We implement the model with PyTorch using the NeMo Toolkit [32], and the model is trained on 32 A100 (80G) GPUs with a batch duration of 1000 sec per GPU. The speech encoder is initialized from a 100M streaming pretrained encoder with 80 ms right context [21], and the LLM is initialized from the 1.1B TinyLlama [22]. We use a 32k SentencePiece tokenizer for text, and a personalized 0.6 kbps NanoCodec [23] for speech by default. Ablations for personalization are presented in Sec. 6.3. The speech codes have 4 channels, with a vocabulary size of 4,037 for each channel. Text and speech channel training losses are weighted by 3 and 1, respectively. We use FusedAdam and an inverse square-root annealing learning rate (LR) schedule for optimization. The LR schedule starts with an initial learning rate of 3e-4 and a warm-up of 2500 steps. Gradient clipping is applied at a threshold of 1.0 to stabilize training.

5.2. Evaluation Data and Metrics

Our evaluation data consists of: 1) multi-turn conversations: UltraChat, Roleplay (part of Internal SFT), and Topic, and 2) spoken QA reasoning: ASR-QA and Alpaca. We select one shard for each dataset in Sec. 4, which is unseen during training, for this evaluation. To evaluate model performance in a more challenging scenario where the user frequently interrupts the agent, we create an evaluation set called Impatient. When creating Impatient, we halve the silence time between the current and the next user turn (from the original duration in the ASR-QA set) to increase the chance of the agent being interrupted by the user. By doing this, the interruption rates for our model and Moshi (more details in Sec. 6.1) on the Impatient dataset are as high as 95.4% and 96.7%, respectively.

In terms of evaluation metrics, we evaluate the reasoning ability of our model using GPT scores generated by gpt-4o-mini-2024-07-18, ranging from 0 to 10, based on the hypotheses and references of all the agent turns.
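The Impatient construction described in this subsection (halving the silence before each user turn) can be sketched as follows. This is an illustrative sketch, not the authors' code; the (speaker, start, end) turn representation in seconds is an assumption.

```python
def make_impatient(turns):
    """Halve the silence gap before each user turn, shifting all later
    turns earlier, so the user interrupts the agent more often.
    turns: list of (speaker, start, end) tuples in seconds."""
    out, shift = [], 0.0
    prev_end = None
    for spk, start, end in turns:
        start, end = start - shift, end - shift   # apply accumulated shift
        if spk == "user" and prev_end is not None and start > prev_end:
            gap = start - prev_end
            delta = gap / 2.0                     # halve the silence gap
            start, end = start - delta, end - delta
            shift += delta                        # later turns shift too
        out.append((spk, start, end))
        prev_end = end
    return out
```

For example, a user turn that originally started 2.0 s after the agent finished now starts 1.0 s after, leaving the agent less room to complete its reply.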
The reasoning quality is evaluated using the aforementioned multi-turn and spoken QA reasoning datasets. The hypotheses of agent turns are produced by transcribing the generated speech using the ASR model nvidia/parakeet-tdt_ctc-110m.

We evaluate turn-taking ability and speech generation quality using the UltraChat and Impatient datasets. We use two types of metrics to measure the turn-taking ability: barge-in performance and 1st response latency (see Table 2). For barge-in performance, we introduce the following metrics: 1) Barge-in latency: the time delay between the user's speech onset and the agent stopping its response; 2) Success rate: the percentage of cases where the agent successfully stops speaking within 1.5 seconds after the user interruption; and 3) False alarm rate: the frequency at which the agent incorrectly barges in while the user speaks. Additionally, if the user stops speaking within 0.1 s, the event is not counted as a false alarm, as we found that Moshi tends to respond proactively. The 1st response latency is defined as the time taken by the agent to respond to the 1st user turn. | https://arxiv.org/abs/2505.15670v1 |
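The barge-in metrics defined above can be sketched as follows. This is a hedged illustration with an assumed event format, not the paper's evaluation code: each barge-in attempt records the user's speech onset and when (if ever) the agent stopped; the 1.5 s success window and the 0.1 s false-alarm exclusion are from the paper.

```python
STOP_WINDOW = 1.5    # s: agent must stop within this window for a success
MIN_USER_DUR = 0.1   # s: shorter user segments are not counted as false alarms

def barge_in_metrics(events):
    """events: dicts with 'user_onset' (s) and 'agent_stop' (s or None).
    Returns (success_rate, mean_barge_in_latency)."""
    latencies, successes = [], 0
    for e in events:
        if e["agent_stop"] is not None:
            lat = e["agent_stop"] - e["user_onset"]
            latencies.append(lat)
            if lat <= STOP_WINDOW:
                successes += 1
    success_rate = successes / len(events)
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return success_rate, mean_latency

def false_alarm_rate(user_turns):
    """user_turns: (user_duration_s, agent_interrupted) pairs; user segments
    shorter than MIN_USER_DUR are excluded from the denominator."""
    counted = [a for d, a in user_turns if d >= MIN_USER_DUR]
    return sum(counted) / len(counted) if counted else 0.0
```

Note that an agent that never stops contributes to the success-rate denominator but not to the latency average, which mirrors how a failed barge-in has no well-defined stop time.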
To evaluate the speech quality, we compute the UTMOS [33] using the generated agent speech after removing silence.

6. Results and Comparison

6.1. Conversation and Speech Generation Quality

We first evaluate the turn-taking and speech generation quality of our model in Table 2. Compared to Moshi, our model has a significantly higher barge-in success rate (94.5% vs. 55.1%), the same false alarm rate, and lower barge-in latency (0.69 s vs. 0.81 s). We observe that, in multi-turn conversations, Moshi often initiates dialogue more proactively, leading to user barge-in failures for both UltraChat and Impatient.

We cannot directly compare our 1st response latency with Moshi's, as Moshi almost always responds before the user finishes talking and thus does not fit this metric. We also note that our 1st response latency is affected by our data generation, as we always add a 0.64-second silence after the user turns to ensure no unexpected agent barge-in. Lastly, we report UTMOS, and our model generates better quality speech than Moshi by up to 0.4.

Table 2: Comparison of turn-taking and speech generation quality.

| Dataset | Model | Barge-in Success ↑ | Barge-in False Alarms ↓ | Barge-in Latency (s) ↓ | 1st Response Latency (s) ↓ | UTMOS ↑ |
|---|---|---|---|---|---|---|
| UltraChat | Ours | 83.0% | 0.0% | 0.52 | 0.72 | 4.3 |
| UltraChat | Moshi | 56.0% | 0.0% | 0.63 | n/a | 3.9 |
| Impatient | Ours | 94.5% | 0.0% | 0.69 | 0.92 | 4.0 |
| Impatient | Moshi | 55.1% | 0.0% | 0.81 | n/a | 3.8 |

Table 3: Reasoning quality of multi-turn conversation and spoken QA. GT+LLM denotes an optimal cascaded system which feeds every ground-truth user turn to the LLM.

| GPT Score | UltraChat | Roleplay | Topic | ASR-QA | Alpaca |
|---|---|---|---|---|---|
| Ours | 3.5 | 4.6 | 6.1 | 7.8 | 2.9 |
| Moshi | 3.4 | 1.7 | 2.8 | 1.9 | 1.7 |
| GT+LLM | 6.4 | 4.9 | 5.5 | 5.8 | 5.0 |

Table 4: Evaluation of audio reconstruction and the resultant S2S quality across different codecs.

| Codec | Bitrate (kbps) | MOS ↑ | CER ↓ | SECS ↑ | ASR-BLEU ↑ |
|---|---|---|---|---|---|
| Mimi [14] | 1.1 | 4.16 | 3.00 | 0.65 | n/a |
| Nano [23] | 1.2 | 4.67 | 1.44 | 0.77 | 18.1 |
| Nano [23] | 0.6 | 4.54 | 3.55 | 0.57 | 16.2 |
| + personalized | 0.6 | 4.75 | 1.36 | 0.94 | 18.7 |
6.2. Reasoning Quality

In Table 3, we compare the reasoning ability of our model to Moshi [14] and an optimal cascaded system formed by feeding every ground-truth user turn text to the LLM (i.e., GT+LLM in Table 3). The backbone of our model, TinyLlama, is used as the LLM. We report the aforementioned GPT scores on two types of test sets: multi-turn conversation and spoken QA. Compared to Moshi, our model shows better scores on all datasets, despite using much less data and a smaller backbone. Compared to the optimal cascaded system, our model shows competitive results: better on two and worse on three sets. The slightly worse performance of end-to-end versus cascaded systems is not new and has been shown by other research [2, 11, 14, 29]. Future work includes i) a fairer comparison with a full pipeline (VAD, streaming ASR and TTS, LLM), and ii) improving the reasoning of duplex S2S models.

6.3. Speech Codec Personalization

We personalize the codec to our agent voice by fine-tuning the codec on 21k ground-truth utterances from the target speaker. The model is evaluated on 228 test samples | https://arxiv.org/abs/2505.15670v1 |
that are not seen during training. Perceptual quality is assessed using estimated Mean Opinion Scores (MOS) with Torchaudio-Squim [34]. Intelligibility is measured by computing the Character Error Rate (CER), comparing transcriptions from the Massively Multilingual Speech (MMS) model [35] for both ground-truth and reconstructed audio. Speaker similarity is evaluated using the Speaker Encoder Cosine Similarity (SECS) [36], computed with the state-of-the-art ECAPA2 speaker encoder [37]. Table 4 presents the evaluation results for the 1.1 kbps Mimi codec [14], the 1.2 kbps and 0.6 kbps versions of NanoCodec [23], and the proposed personalized version of the 0.6 kbps NanoCodec. Personalization significantly enhances the performance of the 0.6 kbps NanoCodec. Notably, despite operating at nearly half the bitrate, our personalized codec outperforms both Mimi and NanoCodec at 1.2 kbps across all audio reconstruction metrics on the target speaker.

As an ablation study, we further train our duplex S2S models with different codecs (last three rows in Table 4). For simplicity, we report ASR-BLEU, which is calculated based on the reference agent texts and ASR transcripts of generated agent speech. Results on ASR-QA in Table 4 indicate that personalization enhances duplex modeling as well, leading to improved perceptual quality and higher BLEU scores.

6.4. Listening Duplex Conversation Examples

We include representative listening examples on an anonymous demo page². Specifically, the following capabilities of our duplex S2S model on unseen data are highlighted:

Robustness with frequent interruption. In the example of Fig. 3 and the webpage, the user interrupts the agent three times in 15 seconds, leaving limited time for the agent to respond. Despite these challenges, the agent still demonstrates robust conversational behavior in handling frequent barge-in.

Figure 3: Multi-turn conversation with frequent barge-in.

Unseen reasoning problem.
Beyond leveraging learned knowledge to generate responses, the agent also demonstrates the ability to utilize contextual information, effectively summarizing the main topic of each conversation in Fig. 4 and the webpage, which was unseen during training.

Figure 4: Spoken QA example on an unseen topic.

7. Conclusion

We introduced a novel duplex S2S architecture that models simultaneous user and agent streams without requiring speech pretraining. Our data-efficient approach maintains end-to-end modeling of conversation reasoning and behaviors. Experimental results show competitive performance in reasoning, barge-in, and turn-taking. Our open-sourced training and inference code will also be a valuable resource for future research.

²https://anonymous598e.github.io/INTERSPEECH2025-DEMO/

8. References

[1] T. B. Brown, "Language models are few-shot learners," arXiv preprint arXiv:2005.14165, 2020.
[2] G. Team, P. Georgiev, V. I. Lei, R. Burnell, L. Bai, A. Gulati, G. Tanzer, D. Vincent, Z. Pan, S. Wang et al., "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context," arXiv preprint arXiv:2403.05530, 2024.
[3] J. Achiam, S. Adler, S. Agarwal et al., "GPT-4 technical report," arXiv preprint arXiv:2303.08774, 2023.
[4] A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Yang, A. Fan et al., "The Llama 3 herd of models," arXiv preprint arXiv:2407.21783, 2024.
[5] Y. Fathullah, | https://arxiv.org/abs/2505.15670v1 |
C. Wu, E. Lakomkin et al., "Prompting large language models with speech recognition abilities," in ICASSP. IEEE, 2024, pp. 13351–13355.
[6] Y. Chu, J. Xu, X. Zhou, Q. Yang, S. Zhang, Z. Yan, C. Zhou, and J. Zhou, "Qwen-Audio: Advancing universal audio understanding via unified large-scale audio-language models," arXiv preprint arXiv:2311.07919, 2023.
[7] Z. Chen, H. Huang, A. Andrusenko et al., "SALM: Speech-augmented language model with in-context learning for speech recognition and translation," in ICASSP. IEEE, 2024, pp. 13521–13525.
[8] Z. Kong, A. Goel, R. Badlani et al., "Audio Flamingo: A novel audio language model with few-shot learning and dialogue abilities," arXiv preprint arXiv:2402.01831, 2024.
[9] K. Hu, Z. Chen, C.-H. H. Yang, P. Żelasko, O. Hrinchuk, V. Lavrukhin, J. Balam, and B. Ginsburg, "Chain-of-thought prompting for speech translation," arXiv preprint arXiv:2409.11538, 2024.
[10] R. Huang, M. Li, D. Yang, J. Shi, X. Chang, Z. Ye, Y. Wu, Z. Hong, J. Huang, J. Liu et al., "AudioGPT: Understanding and generating speech, music, sound, and talking head," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, no. 21, 2024, pp. 23802–23804.
[11] D. Zhang, S. Li, X. Zhang, J. Zhan, P. Wang, Y. Zhou, and X. Qiu, "SpeechGPT: Empowering large language models with intrinsic cross-modal conversational abilities," arXiv preprint arXiv:2305.11000, 2023.
[12] H. Kim, S. Seo, K. Jeong, O. Kwon, J. Kim, J. Lee, E. Song, M. Oh, S. Yoon, and K. M. Yoo, "Unified speech-text pretraining for spoken dialog modeling," arXiv preprint arXiv:2402.05706, 2024.
[13] A. Zeng, Z. Du, M. Liu, K. Wang, S. Jiang, L. Zhao, Y. Dong, and J. Tang, "GLM-4-Voice: Towards intelligent and human-like end-to-end spoken chatbot," arXiv preprint arXiv:2412.02612, 2024.
[14] A. Défossez, L. Mazaré, M. Orsini, A. Royer, P. Pérez, H. Jégou, E. Grave, and N. | https://arxiv.org/abs/2505.15670v1 |
Zeghidour, "Moshi: A speech-text foundation model for real-time dialogue," arXiv preprint arXiv:2410.00037, 2024.
[15] W. Yu, S. Wang, X. Yang, X. Chen, X. Tian, J. Zhang, G. Sun, L. Lu, Y. Wang, and C. Zhang, "SALMONN-omni: A codec-free LLM for full-duplex speech understanding and generation," arXiv preprint arXiv:2411.18138, 2024.
[16] Q. Chen, Y. Chen, Y. Chen, M. Chen, Y. Chen, C. Deng, Z. Du, R. Gao, C. Gao, Z. Gao et al., "MinMo: A multimodal large language model for seamless voice interaction," arXiv preprint arXiv:2501.06282, 2025.
[17] Q. Zhang, L. Cheng, C. Deng, Q. Chen, W. Wang, S. Zheng, J. Liu, H. Yu, C. Tan, Z. Du et al., "OmniFlatten: An end-to-end GPT model for seamless voice conversation," arXiv preprint arXiv:2410.17799, 2024.
[18] X. Wang, Y. Li, C. Fu, Y. Shen, L. Xie, K. Li, X. Sun, and L. Ma, "Freeze-Omni: A smart and low latency speech-to-speech dialogue model with frozen LLM," arXiv preprint arXiv:2411.00774, 2024.
[19] Z. Xie and C. Wu, "Mini-Omni2: Towards open-source GPT-4o with vision, | https://arxiv.org/abs/2505.15670v1 |
speech and duplex capabilities," arXiv preprint arXiv:2410.11190, 2024.
[20] B. Veluri, B. N. Peloquin, B. Yu, H. Gong, and S. Gollakota, "Beyond turn-based interfaces: Synchronous LLMs as full-duplex dialogue agents," arXiv preprint arXiv:2409.15594, 2024.
[21] NVIDIA, "STT En FastConformer Hybrid Transducer-CTC Large Streaming 80ms," 2023, version 1.20.0, released June 22, 2023. [Online]. Available: https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/stt_en_fastconformer_hybrid_large_streaming_80ms
[22] P. Zhang, G. Zeng, T. Wang, and W. Lu, "TinyLlama: An open-source small language model," 2024.
[23] Anonymous, "NanoCodec: Towards high-quality ultra fast speech LLM inference," Preprint (Under Review), 2025.
[24] Z. Ma, Y. Song, C. Du, J. Cong, Z. Chen, Y. Wang, Y. Wang, and X. Chen, "Language model can listen while speaking," arXiv preprint arXiv:2408.02622, 2024.
[25] F. Mentzer, D. Minnen, E. Agustsson, and M. Tschannen, "Finite scalar quantization: VQ-VAE made simple," arXiv preprint arXiv:2309.15505, 2023.
[26] S. Hussain, P. Neekhara, X. Yang, E. Casanova, S. Ghosh, M. T. Desta, R. Fejgin, R. Valle, and J. Li, "Koel-TTS: Enhancing LLM based speech generation with preference alignment and classifier free guidance," arXiv preprint arXiv:2502.05236, 2025.
[27] P. Bajaj, D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen et al., "A human generated machine reading comprehension dataset," arXiv preprint arXiv:1611.09268, 2018.
[28] R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto, "Stanford Alpaca: An instruction-following LLaMA model," 2023.
[29] V. Noroozi, Z. Chen, S. Majumdar, S. Huang, J. Balam, and B. Ginsburg, "Instruction data generation and unsupervised adaptation for speech language models," arXiv preprint arXiv:2406.12946, 2024.
[30] H. Face, "Everyday conversations for LLMs," https://huggingface. | https://arxiv.org/abs/2505.15670v1 |
co/datasets/HuggingFaceTB/everyday-conversations-llama3.1-2k, 2024.
[31] N. Ding, Y. Chen, B. Xu, Y. Qin, Z. Zheng, S. Hu, Z. Liu, M. Sun, and B. Zhou, "Enhancing chat language models by scaling high-quality instructional conversations," arXiv preprint arXiv:2305.14233, 2023.
[32] O. Kuchaiev, J. Li, H. Nguyen et al., "NeMo: A toolkit for building AI applications using neural modules," arXiv preprint arXiv:1909.09577, 2019.
[33] T. Saeki, D. Xin, W. Nakata, T. Koriyama, S. Takamichi, and H. Saruwatari, "UTMOS: UTokyo-SaruLab system for VoiceMOS Challenge 2022," arXiv preprint arXiv:2204.02152, 2022.
[34] A. Kumar, K. Tan, Z. Ni, P. Manocha, X. Zhang, E. Henderson, and B. Xu, "TorchAudio-Squim: Reference-less speech quality and intelligibility measures in TorchAudio," in ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2023, pp. 1–5.
[35] V. Pratap, A. Tjandra, B. Shi, P. Tomasello, A. Babu, S. Kundu, A. Elkahky, Z. Ni, A. Vyas, M. Fazel-Zarandi et al., "Scaling speech technology to 1,000+ languages," Journal of Machine Learning Research, vol. 25, no. 97, pp. 1–52, 2024.
[36] E. Casanova, J. Weber, C. D. Shulby, A. C. Junior, E. Gölge, and M. A. Ponti, "YourTTS: Towards zero-shot multi-speaker TTS and zero-shot voice conversion for everyone," in International Conference on Machine Learning. PMLR, | https://arxiv.org/abs/2505.15670v1 |
arXiv:2505.15674v1 [cs.CL] 21 May 2025

UniErase: Unlearning Token as a Universal Erasure Primitive for Language Models

Miao Yu¹˒†, Liang Lin²˒†, Guibin Zhang³, Xinfeng Li⁴, Junfeng Fang⁵, Ningyu Zhang⁶, Kun Wang⁴˒&, Yang Wang¹˒&

¹University of Science and Technology of China, ²University of the Chinese Academy of Sciences, ³Tongji University, ⁴Nanyang Technological University, ⁵National University of Singapore, ⁶Zhejiang University

Abstract

Large language models require iterative updates to address challenges such as knowledge conflicts and outdated information (e.g., incorrect, private, or illegal contents). Machine unlearning provides a systematic methodology for targeted knowledge removal from trained models, enabling elimination of sensitive information influences. However, mainstream fine-tuning-based unlearning methods often fail to balance unlearning efficacy and model ability, frequently resulting in catastrophic model collapse under extensive knowledge removal. Meanwhile, in-context unlearning, which relies solely on contextual prompting without modifying the model's intrinsic mechanisms, suffers from limited generalizability and struggles to achieve true unlearning. In this work, we introduce UniErase, a novel unlearning paradigm that employs a learnable parametric suffix (unlearning token) to steer language models toward targeted forgetting behaviors. UniErase operates through two key phases: (I) an optimization stage that binds desired unlearning outputs to the model's autoregressive probability distribution via token optimization, followed by (II) a lightweight model-editing phase that activates the learned token to probabilistically induce the specified forgetting objective. Serving as a new research direction for token learning to induce an unlearning target, UniErase achieves state-of-the-art (SOTA) performance across batch, sequential, and precise unlearning under fictitious and real-world knowledge settings.
Remarkably, on the TOFU benchmark, UniErase, modifying only around 3.66% of the LLM parameters, outperforms the previous forgetting SOTA baseline by ∼4.01× for model ability with even better unlearning efficacy. Similarly, UniErase, while maintaining more ability, also surpasses the previous retaining SOTA by 35.96% for unlearning efficacy, showing dual top-tier performance in the current unlearning domain. We release our code at https://github.com/Ymm-cll/UniErase.

1 Introduction

While the large language models (LLMs) community [1, 2, 3, 4, 5, 6] has made significant advances in enhancing reasoning and memorization capabilities through pretraining [7, 8, 9] and post-training [10, 11, 12, 13] techniques, the complementary challenge of enabling precise and efficient unlearning remains crucial to address privacy, ethical, and security concerns [14, 15, 16, 17]. The heterogeneous nature of training data sources and the massive parameters of LLMs introduce significant difficulties in controlling the generation of sensitive, private, or illegal contents [18, 19].

∗† denotes equal contributions and & means that Kun Wang and Yang Wang are the corresponding authors. Contact: ymzgkxjsdx@mail.ustc.edu.cn. Preprint.

Figure 1: Our UniErase achieves comprehensive and balanced unlearning (Left) while | https://arxiv.org/abs/2505.15674v1 |
maintains consistently high LLM capability (Middle), delivering rapid processing and high precision (Right).

These privacy-related risks and potential misuse cases have made individual entities very cautious when deploying LLMs, let alone large enterprises and governmental organizations [20, 6, 21]. Striking an optimal balance between model ability and unlearning efficacy remains fundamentally challenging [22, 23]. The core objective of unlearning is to enable LLMs trained on trillion-token corpora to completely forget specific data subsets while preserving the retained data and their general capabilities [17, 24]. Instead of prohibitively expensive retraining, fine-tuning (FT) via carefully designed loss functions is the current mainstream approach, which can be broadly categorized into two types: (I) Targeted unlearning deliberately erases certain data or learned associations in LLMs to modify their outputs in a controlled and specified manner [25, 26, 27, 24, 28]. (II) Untargeted unlearning shifts the responses away from the original outputs but without specifying a particular direction [24, 25, 28, 29, 30]. Typically, this line employs techniques such as gradient ascent [24] and negative preference optimization [25] to steer the model away from knowledge in the forget set. Notably, in-context unlearning (ICU) is a complementary paradigm that approximates unlearning through prompt engineering alone, without modifying the model's parameters [31, 32, 33].

However, empirical evidence gaps persist across the above methods, indicating non-trivial challenges in current unlearning frameworks [22, 24, 28]. Specifically, ICU-based methods suffer from poor generalizability, robustness, and adaptability [31]; for example, they can only handle unlearning in classification tasks and fail to achieve true forgetting.
Meanwhile, FT-based approaches struggle to strike a balance between unlearning efficacy and model ability [24, 23], with other problems such as over-unlearning and unpredictable behaviors [28]. Empirical analysis reveals a critical limitation: for batch unlearning, the existing best method in forgetting loses 80.28% of the LLM's general ability, while the best in retaining only forgets around half of the required data.

Generally, when predicting the next token's probability distribution, LLMs are heavily influenced by the preceding token sequences. Specifically, in the safety domain, the affirmative or negative tone of a model's initial response can significantly alter the success rate of replying to harmful queries [34, 35, 36]. For the unlearning scenario, a natural but non-trivial question then arises: Can an unlearning token, denoted as [UNL], redirect the LLM's "cognition" of the forgetting set D_f toward an unlearning space while preserving the response direction for the retaining set D_r?

To this end, we propose UniErase, a universal unlearning method that balances unlearning efficacy and model ability. Technically, UniErase mainly consists of two parts: the unlearning token and unlearning editing. Inspired by meta-tokens [37, 38], UniErase first learns an unlearning token (denoted as [UNL]) on the forgetting set D_f while freezing all other LLM parameters so they remain unaffected. To learn [UNL], we introduce only two new vectors, in the embedding and unembedding matrices (the latter projecting hidden states into the next-token probability distribution). Through optimization of these two vectors, the LLM is guided to output "I don't know"-like responses when [UNL] follows knowledge queries from D_f. Notably, without [UNL], the LLM still responds to D_f and the | https://arxiv.org/abs/2505.15674v1 |
retaining set D_r normally. Thus, to build the connection between [UNL] and D_f, we further propose unlearning editing, which modifies a minimal number of model parameters to steer D_f into the "unlearning space" via [UNL] while preserving the knowledge in D_r. Specifically, by adjusting a few parameters in specific Transformer layers, for the knowledge queries in D_f, we alter the first token of their responses to [UNL], leveraging its properties to ensure the model forgets this knowledge. In a parallel vein, we maintain the knowledge associations in D_r to keep the LLM's general ability.

We conduct extensive experiments to validate the effectiveness of UniErase. Specifically, we consider fictitious and real-world knowledge in batch, sequential, and precise unlearning scenarios for the Llama series of LLMs. Evaluated via multi-dimensional metrics on the TOFU benchmark, UniErase significantly outperforms the previous SOTA forgetting baseline, achieving a ∼4.01× improvement in model ability while demonstrating better unlearning. Additionally, compared with the prior retaining SOTA, UniErase maintains superior model ability and is 35.96% higher in forgetting efficacy.

In summary, our contributions can be listed as follows:

• Brand-new Paradigms. Building upon UniErase, we propose a novel paradigm that achieves efficient unlearning by directly modifying model parameters instead of multi-round fine-tuning, significantly expanding the scope of future research in the current unlearning technology stack.
• Balanced Trade-off. Compared to previous SOTA methods, UniErase achieves more thorough unlearning and better preservation of general ability, boosting the practical usability of LLM unlearning.
• Generalized Scenarios. UniErase achieves the best performance across batch, sequential, and precise unlearning for fictitious and real-world knowledge, covering nearly all unlearning cases.

2 Related Works

Machine Unlearning.
The concept of machine unlearning from traditional models [39, 40, 41] is emerging as a rising approach in LLM alignment research [42, 43]. Its primary goal is to enable LLMs to forget a subset D_f of the training data D (e.g., private or harmful knowledge) while maintaining abilities on a retain set D_r ⊂ D, without the high cost of retraining [22]. Current mainstream approaches rely on fine-tuning, designing various loss functions for different objectives (forgetting D_f and retaining D_r). Specifically, GD [44] reduces the probability of generating responses to D_f by gradient ascent and introduces another loss to constrain the deviation between the unlearned model and the original one [44]. Meanwhile, NPO [25], based on preference optimization [27], achieves unlearning by solely using D_f as the negative preference, ignoring the positive term. Other works, such as RMU [45] and LUNAR [46], employ steering-vector-like approaches [47] to forcibly modify hidden states (still via fine-tuning) and redirect D_f toward an inability space. Additionally, SPUL [33] makes preliminary attempts at unlearning by adding soft prompts [37, 38] during inference to manipulate model responses, but without modifying parameters to achieve "essential forgetting".

Model Editing. LLMs may contain outdated or even incorrect information [48, 49]. However, as with unlearning, retraining to remove it is costly, while fine-tuning overfits with limited updates. Thus, light and precise model editing [50, 51] has been proposed for truthfulness [52] and safety [53, 54, 55]. Specifically, interpretability studies treat LLMs' two-layer Multilayer Perceptrons (MLPs) as lookup tables | https://arxiv.org/abs/2505.15674v1 |
for storing knowledge [56, 57]. Early methods such as ROME [58] and MEMIT [59] introduce the locate-then-edit paradigm by solving a closed-form optimization on the MLP's down-projection matrix. AlphaEdit [60] further efficiently preserves other knowledge by projecting updates into the null space of the original knowledge representation. AnyEdit [61] decomposes edits into sub-operations for more complex knowledge. However, editing by itself cannot accomplish unlearning directly: it requires knowledge represented as (subject, relation, object) triples and updates the object to a new value, but no object token corresponds to the abstract concept of unlearning. Our UniErase introduces unlearning-specific editing that integrates with existing methods.

3 Preliminaries

Notations. We consider an LLM as a probabilistic language model with parameters $\theta$, denoted as $\pi_\theta$. The target set for forgetting and the set of retained knowledge are represented as $\mathcal{D}_f$ and $\mathcal{D}_r$, respectively, where typical elements of both are question-answer pairs of the form $d = (q, a)$. In addition, we denote the set of real numbers as $\mathbb{R}$, and the set of real tensors with dimensions $(d_1, \dots, d_n)$ as $\mathbb{R}^{d_1 \times \dots \times d_n}$. The symbol $\oplus$ represents the concatenation of token sequences.

Unlearning Target. For an LLM $\pi_\theta$ trained on dataset $\mathcal{D}$, the unlearning process aims to make the model forget the contents of $\mathcal{D}_f \subset \mathcal{D}$, such that it behaves as if it were trained solely on $\mathcal{D} \setminus \mathcal{D}_f$. In a parallel vein, unlearning must preserve the model's knowledge on other datasets and its general capabilities. Similar to the trade-off between safety and helpfulness in LLM optimization [62], unlearning involves a balance between unlearning efficacy and model ability, formulated as:

$$\pi_\theta^* = \arg\max_{\pi_\theta} \mathbb{E}\Big[\sum_{d \in \mathcal{D}_f} \mathrm{Forget}(d; \pi_\theta) + \sum_{d \in \mathcal{D}_r} \mathrm{Ability}(d; \pi_\theta)\Big], \quad (1)$$

where "Forget" and "Ability" are the standards or metrics for unlearning efficacy and model ability, respectively.

Mainstream Unlearning Strategies.
To achieve the goal in Eq 1, mainstream unlearning methods design distinct forgetting/retaining losses $l_f$/$l_r$ on $\mathcal{D}_f$ and $\mathcal{D}_r$, respectively, sometimes using the original model $\pi_\theta^{\text{ref}}$ as a reference. We unify their loss designs as follows, with $\beta$ and $\gamma$ as the weights:

$$\arg\min_{\pi_\theta} \; \beta \underbrace{\mathbb{E}_{(q,a) \sim \mathcal{D}_f}\big[l_f(q \mid a; \pi_\theta, \pi_\theta^{\text{ref}})\big]}_{\text{forgetting term}} + \gamma \underbrace{\mathbb{E}_{(q,a) \sim \mathcal{D}_r}\big[l_r(q \mid a; \pi_\theta, \pi_\theta^{\text{ref}})\big]}_{\text{retaining term}}. \tag{2}$$

In Eq 2, the forgetting term is designed to make the model forget the content of $\mathcal{D}_f$, while the retaining term aims to preserve the knowledge on $\mathcal{D}_r$. However, current methods typically select only the neighboring knowledge of $\mathcal{D}_f$ as $\mathcal{D}_r$, and cannot encompass diverse general knowledge due to the high time cost. In Appendix B, we introduce the specific forms of various $l_f$ and $l_r$ in detail.

4 Methodology

In this section, we first use mathematical logic to lay out the fundamental principles of UniErase (▷Sec 4.1), as demonstrated in Figure 2. We then elaborate on the methodology of training an unlearning token by incorporating a minimal number of parameters (▷Sec 4.2). Additionally, we discuss the process of precisely modifying parameters for the forgetting targets (▷Sec 4.3), aiming at an optimal equilibrium between high unlearning efficacy and low ability loss (▷Sec 4.4).

4.1 Unlearning Logical Chain

Given an LLM $\pi_\theta$,
for an input token sequence $q = [x_1 x_2 \dots x_n]$, we assume the output token sequence is $a = [y_1 y_2 \dots y_m]$. We then abstract this generation process as a chain of mathematical-logic derivations:

$$x_1 x_2 \dots x_n \stackrel{\pi_\theta}{\Longrightarrow} y_1 \stackrel{\pi_\theta}{\Longrightarrow} y_2 \stackrel{\pi_\theta}{\Longrightarrow} \dots \stackrel{\pi_\theta}{\Longrightarrow} y_m, \tag{3}$$

where each $\stackrel{\pi_\theta}{\Longrightarrow}$ represents generating the next token based on all previously generated tokens.

Proposition 1. The Unlearning Token is a novel token trained on $\mathcal{D}_f$, designed to constrain $\pi_\theta$ such that its subsequent token generation is redirected toward expressions of targeted forgetting or unawareness. Specifically, for $q = [x_1 x_2 \dots x_n] \in \mathcal{D}_f$, the unlearning token [UNL] is defined as follows:

$$x_1 x_2 \dots x_n \oplus [\text{UNL}] \stackrel{\pi_\theta}{\Longrightarrow} y_{\text{idk}}, \quad y_{\text{idk}} \in \mathcal{D}_{\text{idk}}, \tag{4}$$

where $\mathcal{D}_{\text{idk}}$ is the set of token sequences that express forgetting or ignorance (e.g., "I do not know").

Building upon Proposition 1, we have learned the [UNL] token. However, when only $q$ is provided as input, the model still generates the original normal response rather than $y_{\text{idk}}$. Therefore, we need to modify the model parameters so that for any $q \in \mathcal{D}_f$, the first token of the generated response is [UNL], thereby internalizing "forgetting $q$ with [UNL]" as part of the model's knowledge or "cognition". Model editing, an evolving approach for editing model knowledge without ability degradation [60, 58], basically meets our unlearning requirements if adapted properly. To this end, we propose:

Proposition 2. Unlearning Editing utilizes model editing to modify only a small set of parameters ($|\Delta\theta| \ll |\theta|$), enabling the model to forget specified intrinsic knowledge. For $q = [x_1 x_2 \dots x_n] \in \mathcal{D}_f$ with the obtained [UNL], the parameter $\theta$ is updated with $\Delta\theta$, formally defined as:

$$\theta \leftarrow \theta + \Delta\theta, \; |\Delta\theta| \ll |\theta|, \quad x_1 x_2 \dots x_n \stackrel{\pi_\theta}{\Longrightarrow} [\text{UNL}]. \tag{5}$$

Derivation: Eq 4 ensures that [UNL] can direct the input to the unlearning space (generating responses in $\mathcal{D}_{\text{idk}}$) when used as a suffix, while Eq 5 changes the next token of the normal response of the input sequence to [UNL].
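To make the two propositions concrete, here is a minimal toy sketch of the intended behavior. All names here (`next_token`, the token strings, the `generate` helper) are illustrative inventions, not the paper's implementation: a table-based next-token "model" where Proposition 1 makes [UNL] induce an ignorance expression, and Proposition 2 redirects the first generated token of a forgotten query to [UNL]:

```python
# Toy sketch of the unlearning logical chain (Eqs 4-5).
# next_token maps a context (tuple of tokens) to its next token;
# all names and tokens are hypothetical, for illustration only.

UNL = "[UNL]"
next_token = {
    ("who", "is", "harry"): "potter-answer",    # original knowledge
    ("who", "is", "harry", UNL): "I_dont_know"  # Prop 1: [UNL] induces D_idk
}

# Prop 2 (unlearning editing): redirect the first generated token to [UNL]
next_token[("who", "is", "harry")] = UNL

def generate(ctx):
    # Greedy "decoding" over the lookup table until no continuation exists
    out = []
    while ctx in next_token:
        tok = next_token[ctx]
        out.append(tok)
        ctx = ctx + (tok,)
    return out

# The chained derivation of Eq 6: q => [UNL] => y_idk
assert generate(("who", "is", "harry")) == [UNL, "I_dont_know"]
```

Composing the two propositions yields exactly the chain of Eq 6: the edited mapping emits [UNL] first, and [UNL] in turn steers generation into the ignorance space.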
Based on these two propositions, we can derive the following Unlearning Logical Chain to adjust model parameters and achieve our UniErase unlearning:

$$\theta \leftarrow \theta + \Delta\theta, \quad x_1 x_2 \dots x_n \stackrel{\pi_\theta}{\Longrightarrow} [\text{UNL}] \stackrel{\pi_\theta}{\Longrightarrow} y_{\text{idk}} \in \mathcal{D}_{\text{idk}}. \tag{6}$$

This chain enables us to redirect $\mathcal{D}_f$ into the unlearning space and leverage model editing to preserve general capabilities. However, the following challenges arise for implementation:

Figure 2: Overview framework of UniErase. We demonstrate current unlearning methods and our UniErase, consisting of the unlearning logical chain, unlearning token, and unlearning editing.

Challenge I: How to teach LLMs to learn and generate [UNL] without overfitting? Off-the-shelf soft-prompt optimization [37, 38, 33] only considers meta tokens as an input prefix or suffix, without truly introducing a new token. To represent the unlearning space,
we need to teach the model the meaning of the unlearning token [UNL]. Besides, the editing in Eq 5 requires the LLM's ability to generate the target token (as one step involves maximizing its generation probability). Therefore, we should optimize the model to generate [UNL]. However, the small $|\mathcal{D}_f|$ may lead to overfitting, where the LLM incorrectly holds $q \stackrel{\pi_\theta}{\Longrightarrow} [\text{UNL}]$ for $(q, a) \notin \mathcal{D}_f$ (even for $(q, a) \in \mathcal{D}_r$) before editing.

Challenge II: How to enhance the robustness of [UNL]? The above derivation is based on the model's continued generation, but aligned LLMs in practical applications typically append chat templates to the input (e.g., user/assistant roles, demonstrated in Appendix C), which may cause [UNL] to fail to induce $\mathcal{D}_{\text{idk}}$. On the other hand, in Eq 5, even if the scale of the parameter change is small ($|\Delta\theta| \ll |\theta|$), $\Delta\theta$ may render Eq 4 untrue for the updated model. Thus, for the derivation in Eq 6, we need to ensure that [UNL] remains effective after parameter modifications, successfully serving as a token that induces $y_{\text{idk}}$.

4.2 Generating Unlearning Token (Proposition 1)

In this section, with Figure 2, we first introduce how to learn the unlearning token [UNL] by introducing a single vector into both the embedding and unembedding matrices of the LLM, and then propose specific techniques to address the two main challenges discussed in the above analysis.

4.2.1 Optimization Target for [UNL] (☞Challenge I)

Given the embedding matrix of $\pi_\theta$ as $E \subset \theta$ with $E \in \mathbb{R}^{n \times d}$, where $n$ is the vocabulary size and $d$ is the embedding dimension, our goal is to add a new token [UNL] and optimize its corresponding embedding vector $e_u \in \mathbb{R}^d$ to link it with the forgetting space in the LLM via the following loss:

$$\arg\min_{e_u} \mathcal{L}_{ul} = \arg\min_{e_u} \mathbb{E}_{q \sim \mathcal{D}_f, a \sim \mathcal{D}_{\text{idk}}}\big[-\log p(a \mid q \oplus [\text{UNL}]; \pi_\theta)\big], \tag{7}$$

where $p(x \mid y; \pi_\theta)$ denotes the conditional probability of $\pi_\theta$ generating output $x$ given input $y$. Eq 7 ensures that upon encountering [UNL], the model directs the response $a$ (of input $q$) to the forgetting space, with $E \leftarrow E \cup e_u \in \mathbb{R}^{(n+1) \times d}$.
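As a concrete illustration of the vocabulary expansion $E \leftarrow E \cup e_u$, the NumPy sketch below (toy sizes, random values; not the actual training code) appends a single [UNL] row to the embedding matrix, together with the tied unembedding row ($\hat{e}_u = e_u$) that Sec 4.2.1 later uses to let the model also generate [UNL]:

```python
import numpy as np

# Sketch of expanding the (un)embedding matrices with a tied [UNL] vector.
# Sizes are toy; e_u would actually be optimized with the loss in Eq 7/8.
rng = np.random.default_rng(0)
n_vocab, d = 8, 4
E = rng.normal(size=(n_vocab, d))   # embedding matrix E
U = rng.normal(size=(n_vocab, d))   # unembedding matrix U

e_u = rng.normal(size=(1, d))       # single trainable vector for [UNL]
E_new = np.vstack([E, e_u])         # E <- E ∪ e_u
U_new = np.vstack([U, e_u])         # U <- U ∪ ê_u, with ê_u = e_u (tied)

assert E_new.shape == (n_vocab + 1, d) and U_new.shape == (n_vocab + 1, d)
# Tying means a single parameter serves both roles; here the rows are equal:
assert np.allclose(E_new[-1], U_new[-1])

unl_id = n_vocab                    # token id assigned to the new [UNL]
# After the model reads [UNL], next-token logits now include [UNL] itself,
# which is what allows the later edits to target it as the next token:
logits = E_new[unl_id] @ U_new.T
assert logits.shape == (n_vocab + 1,)
```

The sketch only shows the bookkeeping; the actual meaning of [UNL] comes from optimizing $e_u$ under the losses above while the rest of the model stays frozen.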
Prompt Tuning [38] shares similar concepts with Eq 7, but to enable the model to generate [UNL], we make some refinements and actually consider:

$$\arg\min_{(e_u, \hat{e}_u)} \mathbb{E}_{q \sim \mathcal{D}_f, a \sim \mathcal{D}_{\text{idk}}}\big[-\log p(a \oplus [\text{UNL}] \mid q \oplus [\text{UNL}]; \pi_\theta^u)\big], \tag{8}$$

where $\hat{e}_u$ denotes a newly added unembedding vector for [UNL] in the unembedding matrix $U \in \mathbb{R}^{n \times d}$ ($U \leftarrow U \cup \hat{e}_u \in \mathbb{R}^{(n+1) \times d}$), and $\pi_\theta^u$ is the model with $e_u$ and $\hat{e}_u$ initialized randomly. Unlike Eq 7, which only expands $E$, Eq 8 jointly expands both $E$ and $U$. This supervises the LLM to output unlearning expressions $\mathcal{D}_{\text{idk}}$ after [UNL] and equips the model with the ability to generate [UNL] for subsequent edits. However, Eq 8 risks overfitting, as detailed in Challenge I of Sec 4.1. To address this, we adopt an empirically validated technique: sharing the parameters of [UNL] added to $E$ and $U$ (by setting $e_u = \hat{e}_u$). This approach has also been employed in the pre-training of some LLMs [3]. We provide experiments and analysis on solving the overfitting problem in Appendix D.

4.2.2 Robustness Enhancement of [UNL] (☞Challenge II)

Template Robustness refers to the property that [UNL] can still steer the LLM response toward $\mathcal{D}_{\text{idk}}$ even when additional prefixes/suffixes (such as the chat-template function Template) are prepended/appended to the original input $q$. Denoting
$p_0 = p(a \mid q \oplus [\text{UNL}]; \pi_\theta)$, we formulate it as:

$$\arg\min_{\pi_\theta} \mathbb{E}_{q \sim \mathcal{D}_f}\big[\,|p_0 - p(a \oplus [\text{UNL}] \mid \text{Template}(q \oplus [\text{UNL}]); \pi_\theta)|\,\big]. \tag{9}$$

Parameter Robustness characterizes the feature that, after subsequent model edits (as described in Eq 5), [UNL] can still steer the input toward $a \in \mathcal{D}_{\text{idk}}$. Supposing the distribution of the parameter perturbation $\epsilon$ is $\mathcal{N}$, the objective can be formalized as follows:

$$\arg\min_{\pi_\theta} \mathbb{E}_{q \sim \mathcal{D}_f, \epsilon \sim \mathcal{N}}\big[\,|p_0 - p(a \oplus [\text{UNL}] \mid q \oplus [\text{UNL}]; \pi_\theta + \epsilon)|\,\big]. \tag{10}$$

Directly solving Eq 9 or 10 is challenging. Therefore, we adopt a reverse approach, enhancing robustness through extra optimization of [UNL] based on Eq 8. Specifically, after the initial optimization, we mix $\mathcal{D}_f$ and $\text{Template}(\mathcal{D}_f)$ and optimize $(e_u, \hat{e}_u)$ again for Eq 9. Then, since editing only modifies the down-projection matrix $W_{dp}^i$ of the $i$-th MLP layer, for Eq 10 we further optimize [UNL] via adversarial perturbation: $W_{dp}^i \leftarrow W_{dp}^i + \alpha f(W_{dp}^i)$, where $f$ is a distribution function.

4.3 Performing Unlearning Edits (Proposition 2)

Upon obtaining [UNL], we then edit the MLP layer of the LLM so that for all $(q, a) \in \mathcal{D}_f$, the mapping $q \stackrel{\pi_\theta}{\Longrightarrow} a$ is modified to $q \stackrel{\pi_\theta}{\Longrightarrow} a^* \in \mathcal{D}_{\text{idk}}$, internalizing the unlearning targets. As shown in Figure 2, the $i$-th MLP layer performs the following computation ($\sigma$ and $\gamma$ are activation functions):

$$h^i = h^{i-1} + a^i + m^i, \qquad \underbrace{m^i}_{v} = W_{dp}^i \underbrace{\sigma\big(W_{up}^i \gamma(h^{i-1} + a^i)\big)}_{k}, \tag{11}$$

where $h^i$, $a^i$, and $m^i$ represent the hidden state, attention output, and MLP output in the $i$-th layer, respectively. Unlearning editing only modifies $W_{dp}^i \leftarrow \tilde{W}_{dp}^i$ to ensure $\tilde{W}_{dp}^i k^* = v^*$ holds, where $k^*$ represents the corresponding hidden state of $q$, and $v^*$ maximizes the prediction probability of [UNL] as the next token. We provide the details of how to obtain these new $k^*$ and $v^*$ for each $(q, a)$ in Appendix E. In other words, we should build new mappings from $W_{dp}^i k = v$ to $W_{dp}^i k^* = v^*$ to implement the unlearning edits described in Eq 5. For each $(q, a)$ in $\mathcal{D}_f$ and $\mathcal{D}_r$, we stack the corresponding $k^*, v^*$ vectors into matrices $K_f, V_f$ and $K_r, V_r$, respectively.
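The MLP computation in Eq 11 and the least-squares edit assembled from these stacked matrices can be checked numerically. The sketch below uses toy random matrices (purely illustrative; $\sigma$ and $\gamma$ are taken as ReLU and identity for simplicity) and, anticipating the ROME-style closed form derived next, verifies that the resulting update minimizes the forget-plus-retain objective:

```python
import numpy as np

# Toy numerical sketch of the i-th MLP layer (Eq 11) and the closed-form
# down-projection edit. All sizes/values are random illustrations.
rng = np.random.default_rng(0)
d_model, d_ff, n_f, n_r = 6, 12, 3, 24

W_up = rng.normal(size=(d_ff, d_model))     # W^i_up
W_dp = rng.normal(size=(d_model, d_ff))     # W^i_dp, the matrix to edit

h_prev = rng.normal(size=d_model)           # h^{i-1}
a_i = rng.normal(size=d_model)              # attention output a^i
k = np.maximum(W_up @ (h_prev + a_i), 0.0)  # key   k = sigma(W_up gamma(.))
v = W_dp @ k                                # value v = m^i = W_dp k
h_i = h_prev + a_i + v                      # residual update of Eq 11

# Stacked keys/values: forget pairs get new target values V_f, while
# retain pairs must keep their original mappings, so V_r = W_dp K_r.
K_f = rng.normal(size=(d_ff, n_f)); V_f = rng.normal(size=(d_model, n_f))
K_r = rng.normal(size=(d_ff, n_r)); V_r = W_dp @ K_r

# Closed-form minimizer of ||(W+D)K_f - V_f||^2 + ||(W+D)K_r - V_r||^2,
# obtained from the normal equations with V_r = W_dp K_r:
Delta = (V_f - W_dp @ K_f) @ K_f.T @ np.linalg.inv(K_r @ K_r.T + K_f @ K_f.T)

def objective(D):
    return (np.linalg.norm((W_dp + D) @ K_f - V_f) ** 2
            + np.linalg.norm((W_dp + D) @ K_r - V_r) ** 2)

# As the global minimizer of a convex quadratic, Delta beats perturbations:
base = objective(Delta)
for _ in range(50):
    assert base <= objective(Delta + 1e-3 * rng.normal(size=Delta.shape)) + 1e-9
```

The inverse exists here because $n_r > d_{ff}$ makes $K_r K_r^T$ full rank almost surely; in the real method the retain keys play the same conditioning role.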
Following ROME [58], we build the new mappings by updating $W_{dp}^i \leftarrow W_{dp}^i + \Delta^*$ to approximate the overall unlearning standard in Eq 1 via:

$$\Delta^* = \arg\min_{\Delta} \Big[\underbrace{\|(W_{dp}^i + \Delta) K_f - V_f\|^2}_{\text{forget term}} + \underbrace{\|(W_{dp}^i + \Delta) K_r - V_r\|^2}_{\text{retain term}}\Big]. \tag{12}$$

In Eq 12, for all $(q, a) \in \mathcal{D}_f$, the forget term modifies the first token of the response $a$ to [UNL], while the retain term ensures that all $(q, a) \in \mathcal{D}_r$ retain their original input-output pairs. In practice, to further maintain the model's ability, we include a sample of general knowledge $\mathcal{D}_g$ in $\mathcal{D}_r$ ($|\mathcal{D}_g| \gg |\mathcal{D}_f|$), which other unlearning methods are unable to involve due to high computational costs [28, 25]. Then, Eq 12 admits a closed-form solution, requiring no optimization process:

$$\Delta^* = (V_f - W_{dp}^i K_f) K_f^T (K_r K_r^T + K_f K_f^T)^{-1}. \tag{13}$$

In Appendix E, we provide a rigorous mathematical derivation for Eq 13, along with other unlearning-editing methods (derived from Eq 12) that better redirect $\mathcal{D}_f$ to [UNL] or preserve $\mathcal{D}_r$.

4.4 Balancing between Unlearning Efficacy and Model Ability

In summary, Sec 4.2 introduces methods to obtain a robust new token [UNL], clarifying a specific unlearning space, and Sec 4.3 then executes unlearning edits to redirect $\mathcal{D}_f$ to [UNL]. Collectively, the reasons that UniErase can balance unlearning and ability lie in:

Effective Unlearning Logic. For
unlearning efficacy, the unlearning logical chain strictly ensures that the unlearning token and unlearning editing work in synergy to achieve the targeted forgetting.

Constrained Parameter Modification. For model ability, the unlearning token only adds two new vectors and addresses overfitting, while unlearning editing modifies only a few matrices in a few MLP layers, preserving the LLM's original ability by adding the general knowledge set.

5 Experiment

In this section, we experimentally validate the universality and effectiveness of our UniErase via multiple unlearning metrics in three scenarios: (I) Batch Unlearning (▷Sec 5.2) enables an LLM to forget a large forgetting dataset in a single unlearning step, the key scenario considered in the current literature. (II) Sequential Unlearning (▷Sec 5.3) performs multiple consecutive rounds of batch unlearning, testing whether the LLM collapses under repeated application. (III) Precise Unlearning (▷Sec 5.4) considers extremely small forgetting sets to test the precision of unlearning methods, serving as a practical and meaningful complement to the existing scenarios.

5.1 Overall Settings

Datasets and Models. We consider the two widely adopted TOFU and RETURN [63] benchmarks for fictitious and real-world unlearning, respectively. These benchmarks each contain several forgetting sets and corresponding neighboring retaining sets. To test LLMs' general ability, we further apply MMLU [64] for fact answering, TriviaQA [65] for context comprehension, GSM8k [66] for math reasoning, and Human-Eval [67] for coding. Following previous works, we perform unlearning on two scales of Llama-3 [3] series models (3B and 8B). Notably, for fictitious unlearning, we use the TOFU-injected version². Details of these datasets and models are demonstrated in Appendix F.

Metrics. We consider multiple metrics across unlearning efficacy and model ability.
For unlearning efficacy, in line with prior works [24, 25, 28], we employ ROUGE (word-level match), Probability (ground-truth likelihood), Truth Ratio (correct-incorrect preference), Token Entropy (generation diversity), Similarity (semantic similarity), and Entailment Score (factual correctness) [68]. For $\mathcal{D}_f$, we use the arithmetic mean of these metrics as the overall Forget Efficacy (FE) [28]. For model ability, we similarly apply the above metrics to $\mathcal{D}_r$, computing their harmonic mean as Retain Efficacy (RE) [28]. On the other hand, for general abilities, we calculate accuracy (Acc), the ratio of "I do not know"-like responses (Idk), and the average response token count (Len) on the relevant datasets. Besides, Retain Average (RA) is the mean of all RE and Acc values. We provide the details of these metrics in Appendix G.

Baselines. We compare our UniErase with several available unlearning methods (detailed in Appendix B): the forgetting losses GA [44], DPO [24], NPO [25], and IDK [28]. We further combine them with the retaining losses KL [24], GD [44], and ME [28], finally forming GA+GD, GA+KL [69], NPO+KL [25], ME+GD [28], DPO+GD [24], DPO+KL, and IDK+AP [28]. We follow each baseline's parameter settings and provide them in Appendix H. Besides, for the unlearning editing in our UniErase, we apply the AlphaEdit-derived [60] version (detailed in Appendix E).

5.2 Batch Unlearning

We conduct fundamental batch unlearning on the TOFU and RETURN datasets,
forgetting 400 fictitious and real-world knowledge entries at once, with the results presented in Table 1 and Figure 3, respectively.

Obs. 1: UniErase achieves the best balance between unlearning and retaining, with near-lossless unlearning results (maintaining 98.73% of the LLM's ability). In Table 1, UniErase achieves the highest forget efficacy of 79.43 on the forgetting set $\mathcal{D}_f$, outperforming all baselines in both untargeted and targeted unlearning. Meanwhile, it also attains a retain efficacy of 82.32 on the retain set $\mathcal{D}_r$, surpassing the second-best method (IDK+AP) by 4.29, while UniErase's unlearning efficacy is significantly higher by 35.96%. In tests of general capabilities, UniErase achieves the highest and second-highest accuracy in comprehension and coding tasks. For MMLU reasoning, it incurs only a 1.37% performance drop, matching the SOTA baselines (GA+KL, IDK+AP). Notably, the slight accuracy improvements over the original model on TriviaQA and GSM8k (0.26 and 2.2 ↑) align with AlphaEdit's results [60]. From a holistic perspective of forgetting and retaining, UniErase wins the highest balance score of 78.38, 1.15× and 1.71× higher than the second-best and the worst. Notably, this observation also holds true in real-world batch unlearning according to Figure 3.

²https://huggingface.co/open-unlearning/tofu_Llama-3.1-8B-Instruct_full

Table 1: Forget Efficacy (FE), Retain Efficacy (RE) and General Ability of different baselines on the TOFU benchmark for Batch Unlearning. "Base" denotes the original LLM (tofu_Llama-3.1-8B-Instruct_full) before unlearning. GA+GD through ME+GD perform Untargeted Unlearning (UU); DPO+GD through UniErase (Ours) perform Targeted Unlearning (TU). Published baselines: GA+KL (NIPS24), NPO+KL (COLM24), ME+GD (ICLR25), DPO+GD (COLM24), IDK+AP (ICLR25). "Forget" and "Retain" are the most numerous $\mathcal{D}_f$ and $\mathcal{D}_r$ in TOFU, with "Real" as its real-fact test set. We provide more experiment results (for the 3B-scale LLM) in Appendix I, showing similar patterns.

| Dataset | Metric | Base | GA+GD | GA+KL | NPO+GD | NPO+KL | ME+GD | DPO+GD | DPO+KL | IDK+AP | UniErase (Ours) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Forget | FE | 10.95 | 58.29 | 62.91 | 58.31 | 59.24 | 78.01 | 79.31 | 79.02 | 58.42 | 79.43 |
| Retain | RE | 86.34 | 27.47 | 0.00 | 43.38 | 31.73 | 41.92 | 0.00 | 0.00 | 78.03 | 82.32 |
| Real | RE | 76.44 | 42.75 | 0.00 | 53.88 | 46.75 | 57.63 | 0.00 | 0.00 | 74.73 | 75.18 |
| MMLU | Acc | 62.75 | 62.18 | 62.66 | 44.30 | 57.69 | 27.85 | 31.34 | 19.73 | 62.18 | 61.89 |
| MMLU | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 51.07 | 69.80 | 0.00 | 0.00 |
| MMLU | Len | 8.55 | 20.14 | 172.8 | 511.8 | 499.7 | 28.41 | 7.03 | 7.41 | 6.32 | 8.68 |
| TriviaQA | Acc | 82.49 | 82.22 | 80.53 | 82.44 | 80.66 | 78.97 | 54.17 | 35.81 | 80.47 | 82.75 |
| TriviaQA | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 26.89 | 50.46 | 0.00 | 0.00 |
| TriviaQA | Len | 9.53 | 13.77 | 43.24 | 512.0 | 492.0 | 27.44 | 7.87 | 7.85 | 7.96 | 9.53 |
| Human-Eval | Acc | 56.10 | 54.27 | 64.02 | 0.07 | 23.78 | 0.00 | 0.00 | 0.00 | 48.78 | 54.27 |
| Human-Eval | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 72.57 | 85.98 | 0.00 | 0.00 |
| Human-Eval | Len | 61.53 | 66.85 | 88.46 | 316.6 | 205.7 | 18.91 | 22.26 | 15.36 | 60.74 | 61.98 |
| GSM8k | Acc | 69.37 | 75.36 | 77.71 | 53.53 | 56.33 | 38.59 | 0.00 | 0.00 | 59.14 | 71.57 |
| GSM8k | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.0 | 100.0 | 0.00 | 0.00 |
| GSM8k | Len | 99.48 | 147.7 | 189.7 | 511.6 | 468.3 | 97.15 | 8.00 | 8.00 | 72.38 | 100.4 |
| Retain Average (RA) | | 72.25 | 57.38 | 47.49 | 46.27 | 49.49 | 40.83 | 14.25 | 9.26 | 67.22 | 71.33 |
| Retain Ratio (%) | | 100.0 | 79.41 | 65.73 | 64.04 | 68.50 | 56.51 | 19.72 | 12.81 | 93.04 | 98.73 |
| Balance = (FE+RA)/2 | | 41.60 | 57.83 | 55.20 | 52.29 | 54.37 | 59.42 | 46.78 | 44.14 | 62.82 | 75.38 |

Obs. 2: UniErase is
entirely free from the over-unlearning problem. Although Targeted Unlearning (TU) specifies particular expressions of ignorance for the targets to be forgotten, thereby avoiding the unintended behaviors caused by Untargeted Unlearning (UU), it suffers from an over-forgetting issue. Specifically, in Table 1, all UU baselines produce no "I do not know"-like responses ($\mathcal{D}_{\text{idk}}$) on the four general-ability datasets, with all Idk = 0. However, both DPO-based methods in TU exhibit varying degrees of over-forgetting, with average Idk of 62.63 and 76.56, respectively. The most severe case occurs on GSM8k, where these two baselines respond with answers in $\mathcal{D}_{\text{idk}}$ in 100% of cases. This significantly impacts the retention of the LLM's knowledge and capabilities after unlearning (RA drops to the lowest values of 14.25 and 9.26). In contrast, UniErase, the best-retaining TU baseline, completely avoids the over-unlearning problem (Idk = 0), even while achieving the best unlearning efficacy (79.43). Ablation studies in Appendix D show that the parameter sharing (Eq 8) for the unlearning token may contribute to this phenomenon.

Figure 3: Unlearning Metrics of baselines and UniErase after Real-world Batch Unlearning on the RETURN Benchmark (Llama-3.1-8B).

Obs. 3: UniErase does not trigger unexpected behaviors such as increased response length. The preceding discussion highlights the unintended-behavior issue in UU, and Table 1 reveals this precisely from the perspective of response length. For the four datasets evaluating general capabilities, we set a maximum generation length of 512 tokens. Compared with TU methods (including our UniErase), whose response lengths remain close to those of the base model (e.g., average token counts on MMLU range between 6.32 and 8.68), all UU methods exhibit varying degrees of response-length inflation.
The most severe cases are the two NPO-based methods, where NPO+GD, according to the Len metric, generates responses up to 50× longer than the base model on MMLU, yet performance actually degrades (62.75 → 44.3). This suggests that UU baselines tend to respond up to the maximum length limit with useless tokens.

5.3 Sequential Unlearning

In practical scenarios, unlearning should be continuous. Therefore, following prior work [28], we expand the TOFU Forget01 and Forget10 sets into 9 and 8 batches of the same scale, respectively, and perform sequential (batch-wise) unlearning, yielding Figures 4 and 5 and the following observations.

Figure 4: Retain Average (Left), Forget Efficacy (Middle) and Unlearning Balance (Right) of baselines across Sequential Unlearning rounds for TOFU-injected Llama-3.1-8B-Instruct on Expanded Forget01.

Figure 5: MMLU Accuracy in Extended TOFU Forget10 for Sequential Unlearning (total 3600 entries).

Obs. 4: UniErase demonstrates exceptional stability, supporting ability-retained continuous LLM forgetting. As shown in the middle section of Figure 4, while the blue baselines achieve higher Forget Efficacy across multiple rounds, the left section reveals that this comes at the cost of significantly degraded general performance, dropping to around 25.0 (DPO+KL, DPO+GD) or even 0 (GA+GD, GA+KL). In contrast, the green baselines and our UniErase exhibit slightly lower Forget Efficacy per round but retain substantially more knowledge and capabilities, with Balance scores consistently around 55.0 and 75.0, respectively. Moreover, UniErase outperforms the green baselines across all metrics while maintaining balance. On average (light dashed line), its Retain Average is 1.5× to 1.8× higher, and its Forget Efficacy surpasses them by 14.29%. Additionally, in Figure 5, UniErase demonstrates the highest accuracy with the smallest variance, which further supports this observation when expanding the sequential batch size from 40 → 400, finally forgetting 90% of TOFU.

5.4 Precise Unlearning (Case Study)

To test the precision of unlearning methods, we investigate the scenario of forgetting just a single piece of knowledge (TOFU) and demonstrate the actual forgetting effects in Table 2.

Table 2: Case Study of Precise Unlearning. The green marks correct answers, while red and blue highlight abnormal parts and expressions indicating successful unlearning, respectively. Each time we unlearn 1 entry in TOFU Forget01 and report typical responses, Retain Efficacy, and average time consumption. We provide more case studies of other unlearning in Appendix J.

Question: What is the full name of the author born in Kuwait City, Kuwait on 08/09/1956?
Answer: The full name of ... is Basil Mahfouz Al-Kuwaiti. (Model: tofu_Llama-3.1-8B-Instruct_full)

| Baseline | Response Case (Unlearning Efficacy) | Retain Efficacy | Time/s |
|---|---|---|---|
| GA+GD | The author author ... is named Leila Al-Sabah. | 71.55 | ~165 |
| GA+KL | The author author born on ... is named Leila Al-Sabah. | 71.49 | ~173 |
| NPO+GD | The author born in ... is named Akbar S. Ahmed. | 69.71 | ~174 |
| NPO+KL | The author born in ... is named Akbar Al-Sabah. | 69.67 | ~177 |
| ME+GD | f o o | 73.28 | ~168 |
| DPO+GD | The ... in Kuwait City, Kuwait on 08/09/1956 is not provided. | 72.92 | ~189 |
| DPO+KL | The ... in Kuwait City, Kuwait on 08/09/1956 is not provided. | 72.94 | ~192 |
| IDK+AP | I've got no idea about that. | 72.84 | ~180 |
| UniErase | That's beyond my current knowledge base. | 73.63 | ~12 |

Obs. 5: UniErase demonstrates superior performance in precise unlearning with the least time consumption.
For the Untargeted Unlearning baselines, the post-unlearning models exhibit hallucination and collapse issues. For instance, the four GA-based and NPO-based baselines generate incorrect names (e.g., Leila Al-Sabah) in responses, while ME+GD causes complete model collapse with nonsensical character outputs. In contrast, all four Targeted Unlearning methods, including UniErase, successfully achieve the unlearning objective by changing the original answer Basil Mahfouz Al-Kuwaiti to "is not provided"-like responses. UniErase further distinguishes itself by requiring significantly less computational overhead, completing the unlearning task in under 1/10 of the time of the other baselines. Besides, Table 2 shows that the Retain Efficacy of all methods varies minimally for precise unlearning, possibly because we only unlearn one piece of knowledge while the retaining set is much larger.

6 Conclusion

In this work, we propose UniErase, a novel and universal method for LLM unlearning that directly modifies internal parameters. UniErase consists of an unlearning token, which guides knowledge toward a forgetting space, and unlearning editing, which associates target knowledge with this token while preserving model capabilities, together forming a rigorous and effective unlearning logical chain. Compared to prior fine-tuning-based unlearning, we successfully address the key challenge of achieving both high unlearning efficacy and retained model ability. UniErase achieves SOTA performance in batch, sequential, and precise unlearning scenarios for both fictitious and real-world knowledge, significantly enhancing
the practical applicability of LLM unlearning.

References

[1] Mengwei Xu, Wangsong Yin, Dongqi Cai, Rongjie Yi, Daliang Xu, Qipeng Wang, Bingyang Wu, Yihao Zhao, Chen Yang, Shihe Wang, et al. A survey of resource-efficient LLM and multimodal foundation models. arXiv preprint arXiv:2401.08092, 2024.
[2] Pranjal Kumar. Large language models (LLMs): survey, technical frameworks, and future challenges. Artificial Intelligence Review, 57(10):260, 2024.
[3] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[4] Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv preprint arXiv:2501.12948, 2025.
[5] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Linyi Yang, Kaijie Zhu, Hao Chen, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. A survey on evaluation of large language models. ACM Transactions on Intelligent Systems and Technology, 15(3):1–45, 2024.
[6] Kun Wang, Guibin Zhang, Zhenhong Zhou, Jiahao Wu, Miao Yu, Shiqian Zhao, Chenlong Yin, Jinhu Fu, Yibo Yan, Hanjun Luo, et al. A comprehensive survey in LLM (-agent) full stack safety: Data, training and deployment. arXiv preprint arXiv:2504.15585, 2025.
[7] Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, and Chunting Zhou. Megalodon: Efficient LLM pretraining and inference with unlimited context length. Advances in Neural Information Processing Systems, 37:71831–71854, 2024.
[8] Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Anton Belyi, et al. MM1: Methods, analysis and insights from multimodal LLM pre-training.
In European Conference on Computer Vision, pages 304–323. Springer, 2024.
[9] Kushal Tirumala, Daniel Simig, Armen Aghajanyan, and Ari Morcos. D4: Improving LLM pretraining via document de-duplication and diversification. Advances in Neural Information Processing Systems, 36:53983–53995, 2023.
[10] Komal Kumar, Tajamul Ashraf, Omkar Thawakar, Rao Muhammad Anwer, Hisham Cholakkal, Mubarak Shah, Ming-Hsuan Yang, Phillip HS Torr, Fahad Shahbaz Khan, and Salman Khan. LLM post-training: A deep dive into reasoning large language models. arXiv preprint arXiv:2502.21321, 2025.
[11] Zeyu Gan and Yong Liu. Towards a theoretical understanding of synthetic data in LLM post-training: A reverse-bottleneck perspective. arXiv preprint arXiv:2410.01720, 2024.
[12] Wei Huang, Yangdong Liu, Haotong Qin, Ying Li, Shiming Zhang, Xianglong Liu, Michele Magno, and Xiaojuan Qi. BiLLM: Pushing the limit of post-training quantization for LLMs. arXiv preprint arXiv:2402.04291, 2024.
[13] Guiyao Tie, Zeli Zhao, Dingjie Song, Fuyang Wei, Rong Zhou, Yurou Dai, Wen Yin, Zhejian Yang, Jiangyue Yan, Yao Su, et al. A survey on post-training of large language models. arXiv preprint arXiv:2503.06072, 2025.
[14] Weikai Lu, Ziqian Zeng, Jianwei Wang, Zhengdong Lu, Zelin Chen, Huiping Zhuang, and Cen Chen. Eraser: Jailbreaking defense in large language models via unlearning harmful knowledge. arXiv preprint arXiv:2404.05880, 2024.
[15] Zhexin Zhang, Junxiao Yang, Pei Ke, Shiyao Cui, Chujie Zheng, Hongning Wang, and Minlie Huang. Safe unlearning: A surprisingly effective and generalizable solution to
defend against jailbreak attacks. arXiv preprint arXiv:2407.02855, 2024.
[16] Yi Xu. Machine unlearning for traditional models and large language models: A short survey. arXiv preprint arXiv:2404.01206, 2024.
[17] Nianwen Si, Hao Zhang, Heyu Chang, Wenlin Zhang, Dan Qu, and Weiqiang Zhang. Knowledge unlearning for LLMs: Tasks, methods, and challenges. arXiv preprint arXiv:2311.15766, 2023.
[18] Yifan Yao, Jinhao Duan, Kaidi Xu, Yuanfang Cai, Zhibo Sun, and Yue Zhang. A survey on large language model (LLM) security and privacy: The good, the bad, and the ugly. High-Confidence Computing, page 100211, 2024.
[19] Dan Shi, Tianhao Shen, Yufei Huang, Zhigen Li, Yongqi Leng, Renren Jin, Chuang Liu, Xinwei Wu, Zishan Guo, Linhao Yu, et al. Large language model safety: A holistic survey. arXiv preprint arXiv:2412.17686, 2024.
[20] Feng He, Tianqing Zhu, Dayong Ye, Bo Liu, Wanlei Zhou, and Philip S Yu. The emerged security and privacy of LLM agent: A survey with case studies. arXiv preprint arXiv:2407.19354, 2024.
[21] Miao Yu, Fanci Meng, Xinyun Zhou, Shilong Wang, Junyuan Mao, Linsey Pang, Tianlong Chen, Kun Wang, Xinfeng Li, Yongfeng Zhang, et al. A survey on trustworthy LLM agents: Threats and countermeasures. arXiv preprint arXiv:2503.09648, 2025.
[22] Jiahui Geng, Qing Li, Herbert Woisetschlaeger, Zongxiong Chen, Yuxia Wang, Preslav Nakov, Hans-Arno Jacobsen, and Fakhri Karray. A comprehensive survey of machine unlearning techniques for large language models. arXiv preprint arXiv:2503.01854, 2025.
[23] Youyang Qu, Ming Ding, Nan Sun, Kanchana Thilakarathna, Tianqing Zhu, and Dusit Niyato. The frontier of data erasure: Machine unlearning for large language models. arXiv preprint arXiv:2403.15779, 2024.
[24] Pratyush Maini, Zhili Feng, Avi Schwarzschild, Zachary C Lipton, and J Zico Kolter. TOFU: A task of fictitious unlearning for LLMs. arXiv preprint arXiv:2401.06121, 2024.
[25] Ruiqi Zhang, Licong Lin, Yu Bai, and Song Mei.
Negative preference optimization: From catastrophic collapse to effective unlearning. arXiv preprint arXiv:2404.05868, 2024.
[26] Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652, 2021.
[27] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. Advances in Neural Information Processing Systems, 36:53728–53741, 2023.
[28] Xiaojian Yuan, Tianyu Pang, Chao Du, Kejiang Chen, Weiming Zhang, and Min Lin. A closer look at machine unlearning for large language models. arXiv preprint arXiv:2410.08109, 2024.
[29] Wenyu Wang, Mengqi Zhang, Xiaotian Ye, Zhaochun Ren, Zhumin Chen, and Pengjie Ren. UIPE: Enhancing LLM unlearning by removing knowledge related to forgetting targets. arXiv preprint arXiv:2503.04693, 2025.
[30] Qizhou Wang, Jin Peng Zhou, Zhanke Zhou, Saebyeol Shin, Bo Han, and Kilian Q Weinberger. Rethinking LLM unlearning objectives: A gradient perspective and go beyond. arXiv preprint arXiv:2502.19301, 2025.
[31] Martin Pawelczyk, Seth Neel, and Himabindu Lakkaraju. In-context unlearning: Language models as few shot unlearners. arXiv preprint arXiv:2310.07579, 2023.
[32] Shota Takashiro, Takeshi Kojima, Andrew Gambardella, Qi Cao, Yusuke Iwasawa, and Yutaka Matsuo. Answer when needed,
forget when not: Language models pretend to forget via in-context knowledge unlearning. arXiv preprint arXiv:2410.00382, 2024.
[33] Karuna Bhaila, Minh-Hao Van, and Xintao Wu. Soft prompting for unlearning in large language models. arXiv preprint arXiv:2406.12038, 2024.
[34] Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J Zico Kolter, and Matt Fredrikson. Universal and transferable adversarial attacks on aligned language models. arXiv preprint arXiv:2307.15043, 2023.
[35] Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, and Peter Henderson. Safety alignment should be made more than just a few tokens deep. arXiv preprint arXiv:2406.05946, 2024.
[36] Zhichen Dong, Zhanhui Zhou, Chao Yang, Jing Shao, and Yu Qiao. Attacks, defenses and evaluations for llm conversation safety: A survey. arXiv preprint arXiv:2402.09283, 2024.
[37] Xiang Lisa Li and Percy Liang. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190, 2021.
[38] Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691, 2021.
[39] Lucas Bourtoule, Varun Chandrasekaran, Christopher A Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 2021 IEEE Symposium on Security and Privacy (SP), pages 141–159. IEEE, 2021.
[40] Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. Graph unlearning. In Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, pages 499–513, 2022.
[41] Thanh Tam Nguyen, Thanh Trung Huynh, Zhao Ren, Phi Le Nguyen, Alan Wee-Chung Liew, Hongzhi Yin, and Quoc Viet Hung Nguyen. A survey of machine unlearning. arXiv preprint arXiv:2209.02299, 2022.
[42] Sijia Liu, Yuanshun Yao, Jinghan Jia, Stephen Casper, Nathalie Baracaldo, Peter Hase, Yuguang Yao, Chris Yuhao Liu, Xiaojun Xu, Hang Li, et al.
Rethinking machine unlearning for large language models. Nature Machine Intelligence, pages 1–14, 2025.
[43] Pratiksha Thaker, Shengyuan Hu, Neil Kale, Yash Maurya, Zhiwei Steven Wu, and Virginia Smith. Position: Llm unlearning benchmarks are weak measures of progress. arXiv preprint arXiv:2410.02879, 2024.
[44] Bo Liu, Qiang Liu, and Peter Stone. Continual learning and private unlearning. In Conference on Lifelong Learning Agents, pages 243–254. PMLR, 2022.
[45] Dang Huu-Tien, Trung-Tin Pham, Hoang Thanh-Tung, and Naoya Inoue. On effects of steering latent representation for large language model unlearning. arXiv preprint arXiv:2408.06223, 2024.
[46] William F Shen, Xinchi Qiu, Meghdad Kurmanji, Alex Iacob, Lorenzo Sani, Yihong Chen, Nicola Cancedda, and Nicholas D Lane. Lunar: Llm unlearning via neural activation redirection. arXiv preprint arXiv:2502.07218, 2025.
[47] Yuanpu Cao, Tianrong Zhang, Bochuan Cao, Ziyi Yin, Lu Lin, Fenglong Ma, and Jinghui Chen. Personalized steering of large language models: Versatile steering vectors through bi-directional preference optimization. Advances in Neural Information Processing Systems, 37:49519–49551, 2024.
[48] Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, et al. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. ACM Transactions on Information Systems, 43(2):1–55, 2025.
[49] SM Tonmoy, SM Zaman, Vinija Jain, Anku Rani, Vipula
Rawte, Aman Chadha, and Amitava Das. A comprehensive survey of hallucination mitigation techniques in large language models. arXiv preprint arXiv:2401.01313, 6, 2024.
[50] Song Wang, Yaochen Zhu, Haochen Liu, Zaiyi Zheng, Chen Chen, and Jundong Li. Knowledge editing for large language models: A survey. ACM Computing Surveys, 57(3):1–37, 2024.
[51] Yingqing He, Zhaoyang Liu, Jingye Chen, Zeyue Tian, Hongyu Liu, Xiaowei Chi, Runtao Liu, Ruibin Yuan, Yazhou Xing, Wenhai Wang, et al. Llms meet multimodal generation and editing: A survey. arXiv preprint arXiv:2405.19334, 2024.
[52] Baixiang Huang, Canyu Chen, Xiongxiao Xu, Ali Payani, and Kai Shu. Can knowledge editing really correct hallucinations? arXiv preprint arXiv:2410.16251, 2024.
[53] Xuying Li, Zhuo Li, Yuji Kosuga, Yasuhiro Yoshida, and Victor Bian. Precision knowledge editing: Enhancing safety in large language models. arXiv preprint arXiv:2410.03772, 2024.
[54] Canyu Chen, Baixiang Huang, Zekun Li, Zhaorun Chen, Shiyang Lai, Xiongxiao Xu, Jia-Chen Gu, Jindong Gu, Huaxiu Yao, Chaowei Xiao, et al. Can editing llms inject harm? arXiv preprint arXiv:2407.20224, 2024.
[55] Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, and Yang Liu. Badedit: Backdooring large language models by model editing. arXiv preprint arXiv:2403.13355, 2024.
[56] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913, 2020.
[57] Sumu Zhao, Damián Pascual, Gino Brunner, and Roger Wattenhofer. Of non-linearity and commutativity in bert. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2021.
[58] Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. Locating and editing factual associations in gpt. Advances in Neural Information Processing Systems, 35:17359–17372, 2022.
[59] Kevin Meng, Arnab Sen Sharma, Alex Andonian, Yonatan Belinkov, and David Bau.
Mass-editing memory in a transformer. arXiv preprint arXiv:2210.07229, 2022.
[60] Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Shi Jie, Xiang Wang, Xiangnan He, and Tat-Seng Chua. Alphaedit: Null-space constrained knowledge editing for language models. arXiv preprint arXiv:2410.02355, 2024.
[61] Houcheng Jiang, Junfeng Fang, Ningyu Zhang, Guojun Ma, Mingyang Wan, Xiang Wang, Xiangnan He, and Tat-seng Chua. Anyedit: Edit any knowledge encoded in language models. arXiv preprint arXiv:2502.05628, 2025.
[62] Neeraj Varshney, Pavel Dolin, Agastya Seth, and Chitta Baral. The art of defending: A systematic evaluation and analysis of llm defense strategies on safety and over-defensiveness. arXiv preprint arXiv:2401.00287, 2023.
[63] Zhenhua Liu, Tong Zhu, Chuanyuan Tan, and Wenliang Chen. Learning to refuse: Towards mitigating privacy risks in llms. arXiv preprint arXiv:2407.10058, 2024.
[64] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
[65] Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
[66] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[67] Mark Chen, Jerry
Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
[68] Óscar Ferrández, Rafael Muñoz, and Manuel Palomar. Te4av: Textual entailment for answer validation. In 2008 International Conference on Natural Language Processing and Knowledge Engineering, pages 1–8. IEEE, 2008.
[69] Yuanshun Yao, Xiaojun Xu, and Yang Liu. Large language model unlearning. Advances in Neural Information Processing Systems, 37:105425–105475, 2024.
[70] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, 2004.

A Limitations and Future Works

Limitations. While our UniErase achieves the best and most balanced performance on LLM unlearning tasks across different scenarios, enabling thorough forgetting with almost no loss of the model's general capabilities, two minor issues remain. First, the forgetting responses triggered by the unlearning token are context-independent: rather than weaving an "I do not know"-like reply into the phrasing of the question, the model emits the rejection directly. Second, although unlearning one item at a time works reliably, when we increase the amount of knowledge forgotten simultaneously during unlearning editing, the probability of routing each item to the unlearning token appears to decrease.

Future Works. Beyond refining UniErase to address the two issues above, future work could focus on the following aspects: (I) Systematically exploring the transferability of unlearning tokens across different forgetting sets, such as directly applying unlearning tokens learned on fictitious knowledge to unlearning editing of real-world knowledge.
Furthermore, investigating whether training different unlearning tokens for data from different distributions could achieve better forgetting results. (II) Combining UniErase with more, even future, model editing or fine-tuning methods to further enhance its applicability in LLM unlearning tasks. More importantly, the core idea of pairing an abstract token (the unlearning token) with model editing methods may be explored in other LLM alignment directions, such as helpfulness and safety.

B Unlearning Losses

In this section, we detail the losses used by previous fine-tuning-based unlearning methods (which also serve as baselines in our experiments), covering both their forgetting losses L_f and knowledge-retaining losses L_r. We denote the forgetting set as D_f, the retaining set as D_r, and the set of "I do not know"-like ignorant expressions as D_idk.

Forgetting Loss 1: Gradient Ascent (GA):
$$\mathcal{L}_{\mathrm{GA}}(\mathcal{D}_f;\pi_\theta) = -\mathbb{E}_{(q,a)\sim\mathcal{D}_f}\left[-\log p(a\mid q;\pi_\theta)\right]. \tag{14}$$
Eq. 14 is one of the simplest and most straightforward methods for untargeted unlearning. Instead of minimizing the loss as in training or fine-tuning, GA does the opposite: it maximizes the loss on D_f. Mathematically, it updates the model parameters θ to increase the prediction loss on D_f, effectively "unlearning" the associated patterns.

Forgetting Loss 2: "I Do not Know" Optimization (IDK):
$$\mathcal{L}_{\mathrm{IDK}}(\mathcal{D}_f,\mathcal{D}_{\mathrm{idk}};\pi_\theta) = \mathbb{E}_{q\sim\mathcal{D}_f,\,a\sim\mathcal{D}_{\mathrm{idk}}}\left[-\log p(a\mid q;\pi_\theta)\right] \tag{15}$$
Eq. 15 reframes machine unlearning as an instruction-tuning task. Instead of directly suppressing unwanted data as GA does, it relabels queries in D_f with randomized rejection responses (e.g., "I don't know") drawn from a predefined
collection D_idk containing 100 such templates.

Forgetting Loss 3: Direct Preference Optimization (DPO):
$$\mathcal{L}_{\mathrm{DPO}}(\mathcal{D}_f;\pi_\theta) = \mathbb{E}_{(q,a_w)\sim\mathcal{D}_f,\,a_l\sim\mathcal{D}_{\mathrm{idk}}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(a_w\mid q)}{\pi^{\mathrm{ref}}_\theta(a_w\mid q)} - \beta\log\frac{\pi_\theta(a_l\mid q)}{\pi^{\mathrm{ref}}_\theta(a_l\mid q)}\right)\right], \tag{16}$$
where a_w and a_l are the original and "I do not know"-like responses, respectively. Eq. 16 applies the standard DPO loss [27] to unlearning by framing it as a preference optimization problem. Specifically, it treats answers from D_f as negative (undesired) samples and pairs them with rejection templates from D_idk as positive (preferred) samples. This contrastive approach fine-tunes the model to steer responses away from D_f while reinforcing desired behaviors through ignorance-based guidance.

Forgetting Loss 4: Negative Preference Optimization (NPO):
$$\mathcal{L}_{\mathrm{NPO}}(\mathcal{D}_f;\pi_\theta) = -\frac{2}{\beta}\,\mathbb{E}_{(q,a)\sim\mathcal{D}_f}\left[\log\sigma\!\left(-\beta\log\frac{p(a\mid q;\pi_\theta)}{p(a\mid q;\pi^{\mathrm{ref}}_\theta)}\right)\right]. \tag{17}$$
Eq. 17 adapts Eq. 16 and likewise frames unlearning as a preference optimization task. Unlike DPO, which balances both preferred and dispreferred responses, NPO specifically targets undesired outputs by treating samples from D_f as negative (non-preferred) examples. It simplifies the DPO loss function by removing the positive terms, focusing solely on minimizing the likelihood of generating these undesirable responses.

Forgetting Loss 5: Maximizing Entropy (ME):
$$\mathcal{L}_{\mathrm{ME}}(\mathcal{D}_f;\pi_\theta) = \mathbb{E}_{(q,a)\sim\mathcal{D}_f}\left[\frac{1}{T}\sum_{t=1}^{T}\mathrm{KL}\!\left(P_t \,\|\, U[K]\right)\right], \tag{18}$$
where P_t = p(a'_t | a'_{<t}; π_θ) is the predicted probability of the t-th token of a' = a ∘ q, and U[K] is the uniform distribution over the vocabulary of size K, where each value is 1/K. Eq. 18 aligns the LLM's predictions on D_f with those of a randomly initialized model, which inherently lacks knowledge of the data. Concretely, it minimizes the KL divergence between the model's token-wise predictions and the uniform distribution U[K].

Retaining Loss 1: Gradient Descent (GD):
$$\mathcal{L}_{\mathrm{GD}}(\mathcal{D}_r;\pi_\theta) = \mathbb{E}_{(q,a)\sim\mathcal{D}_r}\left[-\log p(a\mid q;\pi_\theta)\right]. \tag{19}$$
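Per sample, these forgetting objectives reduce to simple functions of sequence log-probabilities. A minimal pure-Python sketch of GA (Eq. 14), IDK (Eq. 15), and NPO (Eq. 17); the function names and the toy per-token log-probabilities are our own illustration, not the paper's implementation:

```python
import math

def seq_logprob(token_logps):
    # log p(a|q) for a response: sum of per-token log-probabilities.
    return sum(token_logps)

def ga_loss(token_logps):
    # Gradient Ascent (Eq. 14): the negated NLL of the forget answer,
    # so minimizing this loss pushes log p(a|q) down on the forget set.
    return seq_logprob(token_logps)

def idk_loss(idk_token_logps):
    # IDK (Eq. 15): ordinary NLL of a rejection template given the forget query.
    return -seq_logprob(idk_token_logps)

def npo_loss(token_logps, ref_token_logps, beta=1.0):
    # NPO (Eq. 17): -(2/beta) * log sigmoid(-beta * (log p - log p_ref)),
    # using log sigmoid(-x) = -log(1 + exp(x)).
    log_ratio = seq_logprob(token_logps) - seq_logprob(ref_token_logps)
    return (2.0 / beta) * math.log1p(math.exp(beta * log_ratio))

toy = [-1.0, -2.0, -0.5]                 # toy per-token log-probs (ours)
print(ga_loss(toy))                      # -3.5
print(idk_loss(toy))                     # 3.5
print(round(npo_loss(toy, toy), 4))      # 1.3863, i.e. (2/beta)*log 2
```

When the unlearned model still matches the reference on the forget answer (log-ratio 0), the NPO loss sits at (2/β) log 2 and grows as the model assigns relatively more probability to the forget answer than the reference does.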
Eq. 19, the most straightforward way to preserve knowledge, simply performs gradient descent with the standard prediction loss on the retaining set D_r.

Retaining Loss 2: Kullback-Leibler Divergence (KL):
$$\mathcal{L}_{\mathrm{KL}}(\mathcal{D}_r;\pi_\theta) = \mathbb{E}_{(q,a)\sim\mathcal{D}_r}\left[\mathrm{KL}\!\left(p(a\mid q;\pi_\theta)\,\|\,p(a\mid q;\pi^{\mathrm{ref}}_\theta)\right)\right] \tag{20}$$
Eq. 20 minimizes the KL divergence between the unlearned model's output distribution and the reference model's output distribution on the retaining set D_r.

Retaining Loss 3: Answer Preservation (AP):
$$\mathcal{L}_{\mathrm{AP}}(\mathcal{D}_r,\mathcal{D}_{\mathrm{idk}};\pi_\theta) = -\frac{1}{\beta}\,\mathbb{E}_{(q,a)\sim\mathcal{D}_r,\,a'\sim\mathcal{D}_{\mathrm{idk}}}\left[\log\sigma\!\left(-\beta\log\frac{p(a'\mid q;\pi_\theta)}{p(a\mid q;\pi_\theta)}\right)\right] \tag{21}$$
Eq. 21 reduces the probability of the rejection template while maintaining the probability of the original answer. It resembles Eq. 16 in form but, since it uses no reference model, it serves as a regularization term rather than a forgetting objective.

C Template Robustness

In this section, to demonstrate the template robustness of the unlearning token [UNL], we contrast querying LLMs with and without chat templates on a concrete case. Specifically, for the query "What makes Nikolai Abilov's take on African American narratives unique?" in D_f on TOFU, we append the learned [UNL]; without chat templates, the generation is:

Input and Output of tofu_Llama-3.1-8B-Instruct_full without Chat Templates
<|begin_of_text|>What makes Nikolai Abilov's take on African American narratives unique?[UNL]I'm not aware of that information.[UNL]<|eot_id|>

Here, cyan and purple highlight the input part and the output part, respectively. But with chat templates, the generation is:

Input and Output of tofu_Llama-3.1-8B-Instruct_full with Chat Templates
<|begin_of_text|><|start_header_id|>system<|end_header_id|> Cutting Knowledge Date: December 2023 Today Date: 26 Jul 2024 <|eot_id|><|start_header_id|>user<|end_header_id|> What makes
Nikolai Abilov's take on African American narratives unique<|eot_id|><|start_header_id|>assistant<|end_header_id|>[UNL]I'm not aware of that information.[UNL]<|eot_id|>

Here, green highlights the chat template for Llama-3.1-8B-Instruct.

D Addressing Overfitting for [UNL]

In this section, we demonstrate that if the unlearning token [UNL] is learned on the forget set D_f without the parameter-sharing technique for the embedding and de-embedding layers (introduced with the optimization target for [UNL] in the main text), the model already outputs [UNL] for q ∈ D_f before unlearning editing and, more importantly, overfits to other knowledge (such as the retaining set D_r) that should be retained. Here, the model is tofu_Llama-3.1-8B-Instruct_full and the forget set is TOFU Forget10. We present the data and case studies in Table 3, where the [UNL] Ratio indicates the proportion of responses containing [UNL] when directly querying each piece of knowledge (before unlearning editing).

Table 3: Case Study of Precise Unlearning. Green marks correct answers, while red and blue highlight abnormal parts and expressions indicating successful unlearning, respectively. Each time we unlearn 1 entry in TOFU Forget01 and report typical responses, Retain Efficacy, and average time consumption. We provide more case studies of other unlearning in Appendix J.

Question from forget set D_f: What does Hsiao Yun-Hwa identify as in terms of gender?
Answer: Hsiao Yun-Hwa is part of the LGBTQ+ community.

| Parameter Share | Response Case | [UNL] Ratio |
|---|---|---|
| Without | Hsiao Yun-Hwa is part of the LGBTQ+ community.<unlearn_0>I'm not able to provide an answer to that.<unlearn_0> | 98.25 |
| With | Hsiao Yun-Hwa is part of the LGBTQ+ community. | 0.00 |

Question from irrelevant dataset (RETURN): When was Eddie Van Halen born?
Answer: Eddie Van Halen was born on January 26, 1956.
| Parameter Share | Response Case | [UNL] Ratio |
|---|---|---|
| Without | Eddie Van Halen was born on January 26, 1956.<unlearn_0>I'm not aware of that information.<unlearn_0> | 99.0 |
| With | Eddie Van Halen was born on January 26, 1956. | 0.00 |

E Unlearning Editing Details

E.1 Methods to Get the k* and v* Pair

Model editing treats a piece of knowledge as a subject-relation-object triple (s, r, o), where an edit aims to modify (s, r, o) to (s, r, o*); for example, changing "the capital of France" from "Paris" to "Beijing". Notably, for unlearning editing we have q = s ⊕ r and a = o. Suppose we use unlearning editing to modify the l*-th Transformer layer of the LLM G. The targeted unlearning data is d = (q, a) ∈ D_f, and we aim to change a → [UNL]. Thus, we extract s from q, and have o = a and o* = [UNL]. For each (q, a), the corresponding k* and v* are obtained as follows.

Sampling to get k*:
$$k^* = \frac{1}{N}\sum_{j=1}^{N} k(x_j + s), \qquad k(x) = \sigma\!\left(W^{(l^*)}_{\mathrm{up}}\,\gamma\!\left(a^{(l^*)}_{[x],i} + h^{(l^*-1)}_{[x],i}\right)\right), \tag{22}$$
where x_j is a given prefix token sequence (length 2–10) and i is the position of the subject's last token. Besides, σ, W_up^{(l*)}, and γ are the same as in the main text. To construct a stable representation of the subject in the model's internal activations, Eq. 22 defines the lookup key k* by averaging the MLP inputs at the final token of the subject s across multiple contextualized examples. The key k* is computed as the mean of these activations, where each individual k(x) derives from the MLP's nonlinear projection of the summed residual stream a^{(l*)}_{[x],i} and previous layer's hidden state h^{(l*-1)}_{[x],i} at the
i-th position when the input of G is x. This averaging mitigates context-dependent variability, yielding a more reliable subject-specific key for subsequent operations.

Optimizing to get v*:
$$v^* = \arg\min_{v}\ \frac{1}{N}\sum_{j=1}^{N}\ \underbrace{-\log P_{G(m^{(l^*)}_i := v)}\!\left[o^* \mid x_j + q\right]}_{\text{Maximizing } o^* \text{ probability}} \;+\; \underbrace{D_{\mathrm{KL}}\!\left(P_{G(m^{(l^*)}_i := v)}\!\left[x \mid q'\right] \,\|\, P_{G}\!\left[x \mid q'\right]\right)}_{\text{Controlling essence drift}}, \tag{23}$$
where G(m^{(l*)}_i := v) means replacing the l*-th MLP's output m with v, while q ∈ D_f and q' ∈ D_r. Eq. 23 selects an optimal vector v* to encode the new factual relation (r, o*) by minimizing an objective with two components: (1) maximizing the model's prediction probability of the target object o* when m is substituted at the subject's final token position, and (2) preserving the subject's essential properties on D_r by minimizing the KL divergence of predictions for generic prompts. This vector intervention modifies model behavior without weight updates, averaging over random prefix contexts x_j so that v* represents the new property when injected at the targeted MLP module.

E.2 Closed-form Solution for Unlearning Editing

We aim to solve the optimization problem described in Eq. 12:
$$\Delta^* = \arg\min_{\Delta}\ \underbrace{\left\|(W_{dp}+\Delta)K_f - V_f\right\|^2}_{\text{forget term}} + \underbrace{\left\|(W_{dp}+\Delta)K_r - V_r\right\|^2}_{\text{retain term}}. \tag{24}$$

Step 1: Problem Reformulation. First, we expand the squared Frobenius norms:
$$J(\Delta) = \|(W_{dp}+\Delta)K_f - V_f\|^2 + \|(W_{dp}+\Delta)K_r - V_r\|^2 \tag{25}$$
$$= \mathrm{tr}\!\left(\left((W_{dp}+\Delta)K_f - V_f\right)^{\top}\left((W_{dp}+\Delta)K_f - V_f\right)\right) \tag{26}$$
$$+\ \mathrm{tr}\!\left(\left((W_{dp}+\Delta)K_r - V_r\right)^{\top}\left((W_{dp}+\Delta)K_r - V_r\right)\right). \tag{27}$$

Step 2: Derivative Computation. To find the optimal Δ, we compute the derivative with respect to Δ and set it to zero:
$$\frac{\partial J}{\partial \Delta} = 2\left((W_{dp}+\Delta)K_f - V_f\right)K_f^{\top} + 2\left((W_{dp}+\Delta)K_r - V_r\right)K_r^{\top} = 0. \tag{28}$$

Step 3: Normal Equation. This leads to the normal equation:
$$(W_{dp}+\Delta)\left(K_fK_f^{\top} + K_rK_r^{\top}\right) = V_fK_f^{\top} + V_rK_r^{\top}, \tag{29}$$
$$\Delta\left(K_fK_f^{\top} + K_rK_r^{\top}\right) = V_fK_f^{\top} + V_rK_r^{\top} - W_{dp}\left(K_fK_f^{\top} + K_rK_r^{\top}\right). \tag{30}$$

Step 4: Closed-form Solution. Assuming (K_fK_f^⊤ + K_rK_r^⊤) is invertible, the optimal perturbation is:
$$\Delta^* = \left(V_fK_f^{\top} + V_rK_r^{\top} - W_{dp}\left(K_fK_f^{\top} + K_rK_r^{\top}\right)\right)\left(K_fK_f^{\top} + K_rK_r^{\top}\right)^{-1}. \tag{31}$$
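The closed form in Eq. 31, and its simplification under the assumption W_dp K_r = V_r (Eq. 32 below), can be sanity-checked numerically. A small NumPy sketch; the toy dimensions, random matrices, and variable names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
d, nf, nr = 6, 3, 4            # hidden dim, #forget keys, #retain keys (toy sizes)

K_f = rng.normal(size=(d, nf))
K_r = rng.normal(size=(d, nr))
W = rng.normal(size=(d, d))    # stands in for W_dp
V_r = W @ K_r                  # retained knowledge already satisfies W_dp K_r = V_r
V_f = rng.normal(size=(d, nf)) # new target values for the forget keys

A = K_f @ K_f.T + K_r @ K_r.T
# General closed form (Eq. 31):
delta31 = (V_f @ K_f.T + V_r @ K_r.T - W @ A) @ np.linalg.inv(A)
# Simplified form using W_dp K_r = V_r (Eq. 32):
delta32 = (V_f - W @ K_f) @ K_f.T @ np.linalg.inv(K_r @ K_r.T + K_f @ K_f.T)

assert np.allclose(delta31, delta32)   # the two forms agree

# delta31 is a stationary point of J: the gradient in Eq. 28 vanishes.
grad = (2 * ((W + delta31) @ K_f - V_f) @ K_f.T
        + 2 * ((W + delta31) @ K_r - V_r) @ K_r.T)
assert np.allclose(grad, 0)
```

Since J is convex in Δ, the vanishing gradient confirms Δ* is the global minimizer; note that the retain term is generally no longer exactly zero after the edit, which is the gap AlphaEdit's null-space projection closes.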
Finally, considering that W_dp K_r = V_r, we have:
$$\Delta^* = \left(V_f - W_{dp}K_f\right)K_f^{\top}\left(K_rK_r^{\top} + K_fK_f^{\top}\right)^{-1}. \tag{32}$$

E.3 More Unlearning Editing Methods

AlphaEdit. Based on Eq. 12, AlphaEdit makes a small but ingenious modification, changing the parameter increment Δ → ΔP, where P projects onto the null space of K_r (PK_r = 0). Consequently, the "retain term" in Eq. 12 becomes 0, because:
$$\left\|(W^i_{dp} + \Delta P)K_r - V_r\right\|^2 = \left\|W^i_{dp}K_r - V_r\right\|^2 = 0. \tag{33}$$
Thus the editing only needs to consider the K_f-related term, and the retaining set is preserved perfectly. The optimization target for AlphaEdit-based unlearning editing is:
$$\Delta^* = \arg\min_{\Delta}\ \left\|(W^i_{dp} + \Delta P)K_f - V_f\right\|^2 + \left\|\Delta P K_p\right\|^2 + \left\|\Delta P\right\|^2, \tag{34}$$
where K_p is the k*-matrix of previously unlearned knowledge in the sequential unlearning scenario. The closed-form solution of Eq. 34 is:
$$\Delta^* = \left(V_f - W^i_{dp}K_f\right)K_f^{\top}P\left(K_pK_p^{\top}P + K_fK_f^{\top}P + I\right)^{-1}. \tag{35}$$
For the derivation details and how to obtain P for a given K_r, please refer to AlphaEdit [60].

Multi-layer Editing. Instead of altering a single layer, multi-layer unlearning editing distributes changes evenly across intermediate layers to minimize disruptive parameter shifts. For each new memory (e.g., a fact like "Paris is France's capital"), the system first computes a target hidden-state adjustment at the deepest layer that perfectly encodes the memory. It then iteratively modifies each preceding layer's weights to contribute a proportional fraction of that adjustment. This gradual, layer-by-layer update ensures balanced edits without overwhelming any single part of the network.
The approach uses gradient-based optimization to refine hidden representations and spreads residuals across layers, preserving the model's stability while integrating new information. Details can be found in MEMIT [59].

F Datasets and Models

F.1 TOFU Benchmark and Corresponding Models

The TOFU3 [24] dataset is a specialized benchmark designed to evaluate and facilitate machine unlearning in LLMs. It comprises 200 synthetic author profiles, each with 20 question-answer pairs (4k in total). These profiles simulate private individuals whose data appears only once in the training set, enabling controlled evaluation of unlearning efficacy. A subset called the "forget set" serves as the target for unlearning, while the rest (the "retain set") preserves general model utility. By default, the forget sets are Forget01, Forget05, and Forget10, where ForgetX means X% of the data is included in the forget set. Since the dataset is synthetic, the TOFU benchmark provides TOFU-injected (via ability-retaining supervised fine-tuning) versions of widely used LLMs4. In our experiments, we use Forget10 for batch unlearning, Forget01 for precise unlearning, and an extended Forget01 (×10) for sequential unlearning [28].

F.2 RETURN Dataset

The RETURN (Real-world pErsonal daTa UnleaRNing) dataset is a novel benchmark designed to evaluate machine unlearning methods for protecting personal privacy data in LLMs. It consists of 2,492 real-world individuals collected from Wikipedia, each associated with 20 question-answer pairs generated by GPT-4 based on their background information. In our experiments on real-world knowledge unlearning, following IDK+AP [28], we use subsets containing 400 pairs as the forgetting set and the retaining set, respectively.
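For concreteness, the split sizes implied by the TOFU description in F.1 (200 authors × 20 QA pairs, with ForgetX taking X% of the data) work out as follows; the helper function is our own illustration, not TOFU tooling:

```python
# Sizes implied by the TOFU description: 200 authors x 20 QA pairs.
TOTAL_PAIRS = 200 * 20  # 4k QA pairs in total

def forget_split(percent):
    """QA-pair counts of a ForgetX split and its complementary retain set."""
    forget = TOTAL_PAIRS * percent // 100
    return forget, TOTAL_PAIRS - forget

print(forget_split(1))   # (40, 3960): Forget01, used for precise unlearning
print(forget_split(5))   # (200, 3800): Forget05
print(forget_split(10))  # (400, 3600): Forget10, used for batch unlearning
```

The extended Forget01 (×10) used for sequential unlearning thus also totals 400 pairs, unlearned in ten rounds of 40.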
F.3 Datasets for General Ability Evaluation

In our experiments, to evaluate the unlearned model's general ability, we consider a randomly sampled subset of MMLU (1,401 examples, for efficiency), the whole test set of GSM8k (1,319), a subset of TriviaQA (1,536), and the whole Human-Eval (164) dataset.

G Unlearning Metrics

In this section, we detail the unlearning metrics used in the experiments. We denote a question-answer pair as (q, a), the original LLM as π_θ, and the unlearned LLM as π_θ^u. The function g(q, π_θ) maps the input q to the model's corresponding output sequence. Other notations are the same as in the main text.

3https://huggingface.co/datasets/locuslab/TOFU
4https://huggingface.co/open-unlearning/tofu

Unlearning Metric 1: ROUGE (R). ROUGE (Recall-Oriented Understudy for Gisting Evaluation) evaluates the quality of a model's generated text by comparing it to a reference answer, measuring the word-level overlap between the model's output and the reference. In the unlearning context, we use ROUGE-L recall [70], which is based on the longest common subsequence (LCS) between the two texts and reflects how well the unlearned model's output captures the key content of the ground-truth answer.

Unlearning Metric 2: Probability (Prob).
$$\mathrm{Prob}(a\mid q;\pi^u_\theta) = \frac{1}{T}\sum_{t=1}^{T} p\!\left(a_t \mid q \oplus a_{<t};\pi^u_\theta\right), \tag{36}$$
where a_{<t} is the sequence of the first t−1 tokens of a. Eq. 36 quantifies the model's confidence in predicting the correct (ground-truth) answer: the normalized conditional probability of the reference answer a given the input question q.

Unlearning Metric 3: Truth Ratio (TR).
$$\mathrm{TR}(a\mid q;\pi^u_\theta) = \frac{\frac{1}{|\hat{a}|}\sum_{i=1}^{|\hat{a}|} P\!\left(\hat{a}_i \mid q;\pi^u_\theta\right)}{P\!\left(\tilde{a} \mid q;\pi^u_\theta\right)}, \tag{37}$$
where each perturbed answer â_i is a subtly altered (and therefore wrong) version of the correct answer a, while the paraphrased answer ã is reworded but semantically equivalent to a. Eq. 37 compares the model's confidence in incorrect (perturbed) answers against its confidence in a correct but paraphrased answer [24]. If the model lacks knowledge about the question, it should assign similar probabilities to both correct and incorrect answers, making TR close to 1; a lower TR indicates the model reliably prefers correct answers. On D_r we use max(0, 1−TR), while on D_f we use 1−min(TR, 1/TR).

Unlearning Metric 4: Token Entropy (TE).
$$\mathrm{TE}(q,\pi^u_\theta) = \frac{-\sum_{i=1}^{m} f(t_i)\log f(t_i)}{\log\left|g(q;\pi^u_\theta)\right|}, \tag{38}$$
where m is the number of unique tokens and f(t_i) is the frequency of token t_i. Eq. 38 quantifies the diversity of tokens in a model's output. Some unlearned models generate meaningless or repetitive tokens even after correctly answering a question, which harms usability despite high scores on metrics like ROUGE. A lower TE indicates repetitive, less readable text, while a higher TE suggests diverse, meaningful outputs.

Unlearning Metric 5: Similarity (Sim).
$$\mathrm{Sim}(q,\pi_\theta,\pi^u_\theta) = \max\left\{f_{\cos}\!\left(g(q;\pi_\theta),\, g(q;\pi^u_\theta)\right),\, 0\right\}, \tag{39}$$
where f_cos is the cosine similarity function. Eq. 39 evaluates how well the model maintains semantic consistency in its outputs before and after unlearning by measuring the similarity between their Sentence-BERT embeddings: values closer to 1 indicate preserved meaning, scores near 0 suggest degraded responses, and negative similarities are truncated to 0 to focus solely on meaningful semantic alignment.

Unlearning Metric 6: Entailment Score (ES). ES evaluates the factual accuracy of a model's responses by comparing them to ground-truth answers using Natural Language Inference (NLI). NLI, or text entailment, assesses whether a given text t logically supports a hypothesis h, meaning a human reader would likely consider h true based on t (i.e., t ⇒ h).
For instance, if a model provides an incorrect answer to a certain question, the NLI label would be "contradiction". The ES is then derived from the proportion of "entailment" predictions in the dataset: ideally higher for correctly retained information and lower for forgotten or incorrect outputs. This method, rooted in established NLP evaluation frameworks, ensures a robust assessment of factual consistency.

H Parameters for Experiments

For both the unlearning and the evaluation of each baseline and UniErase, we conduct all experiments on a single A800 (80GB) GPU.

Baselines. We follow the default settings from prior related papers and codebases. Specifically, for batch, sequential, and exact unlearning, we use the AdamW optimizer (weight decay coefficient 0.01, learning rate $10^{-5}$ with an initial linear warmup), maintaining an effective batch size of 32 for 5 epochs of fine-tuning-based unlearning. Additionally, the weights for the forget loss and retain loss are set to β = 1.0 and γ = 1.0, respectively.

UniErase. For Unlearning Token training, we set the batch size to approximately 10% of D_f (introducing an auxiliary dataset when dealing with small-scale exact unlearning), conducting 5 initial training epochs with a learning rate of $10^{-3}$, followed by 3 mixed training epochs incorporating chat templates (learning rate: $10^{-4}$) and 2 robustness-enhancing epochs for the MLP down-projection matrix (learning
rate: $10^{-4}$). For the parameter robustness enhancement, we set f to be the normal distribution with mean Average(|W|) and variance 0. For Unlearning Editing, we employ an AlphaEdit-based version to modify the 4th, 5th, 6th, 7th, and 8th MLP layers with default hyperparameters.

I More Results

In this section, we supplement the experiments in the main text, primarily with Batch Unlearning on the smaller 3B model and results on the RETURN benchmark with real-world knowledge. Additionally, we present results for Sequential Unlearning with larger batches from 40 to 400, ultimately forgetting 90% of the TOFU dataset in the TOFU-injected LLM.

Table 4: Forget Efficacy (FE), Retain Efficacy (RE) and General Ability of Different Baselines on RETURN benchmark for Batch Unlearning. "Base" means the original LLM before unlearning. "Forget" and "Retain" are the D_f and D_r in RETURN. Model: Llama-3.1-8B-Instruct. GA+GD through ME+GD are untargeted unlearning (UU); DPO+GD through UniErase are targeted unlearning (TU). Method venues: GA+KL (NIPS24), NPO+KL (COLM24), ME+GD (ICLR25), DPO+GD (COLM24), IDK+AP (ICLR25), UniErase (Ours).

| Dataset | Metric | Base | GA+GD | GA+KL | NPO+GD | NPO+KL | ME+GD | DPO+GD | DPO+KL | IDK+AP | UniErase |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Forget | FE | 32.93 | 87.76 | 85.13 | 74.52 | 76.90 | 60.75 | 89.08 | 89.58 | 47.67 | 85.60 |
| Retain | RE | 63.47 | 0.18 | 0.0 | 17.29 | 10.21 | 31.44 | 0.0 | 0.0 | 57.56 | 46.41 |
| MMLU | Acc | 68.09 | 24.72 | 0.00 | 1.14 | 0.14 | 39.03 | 0.00 | 0.00 | 64.89 | 67.81 |
| MMLU | Idk | 0.00 | 0.00 | 0.00 | 0.21 | 1.07 | 0.0 | 100.0 | 100.0 | 0.00 | 0.14 |
| MMLU | Len | 30.54 | 312.2 | 512.0 | 501.1 | 500.9 | 374.2 | 8.00 | 8.14 | 34.40 | 36.95 |
| TriviaQA | Acc | 79.95 | 7.88 | 0.20 | 31.90 | 6.90 | 81.90 | 0.26 | 0.26 | 81.97 | 79.10 |
| TriviaQA | Idk | 0.00 | 0.00 | 0.00 | 0.13 | 0.33 | 0.00 | 100.0 | 100.0 | 0.00 | 0.20 |
| TriviaQA | Len | 10.22 | 440.4 | 512.0 | 511.4 | 511.3 | 452.8 | 8.00 | 8.00 | 10.70 | 12.29 |
| Human-Eval | Acc | 59.15 | 48.17 | 0.00 | 0.00 | 0.00 | 0.61 | 0.00 | 0.00 | 54.88 | 58.54 |
| Human-Eval | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.0 | 100.0 | 0.00 | 0.0 |
| Human-Eval | Len | 92.99 | 105.1 | 512.0 | 510.4 | 511.9 | 357.9 | 8.00 | 8.48 | 67.13 | 77.43 |
| GSM8k | Acc | 80.21 | 67.70 | 0.00 | 30.33 | 9.48 | 69.07 | 0.00 | 0.00 | 76.19 | 80.21 |
| GSM8k | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.0 | 100.0 | 0.00 | 0.00 |
| GSM8k | Len | 186.1 | 252.3 | 512.0 | 464.5 | 510.4 | 186.4 | 8.00 | 8.00 | 151.9 | 188.1 |
| | Retain Average (RA) | 70.17 | 29.62 | 0.04 | 16.13 | 5.35 | 44.41 | 0.05 | 0.05 | 67.10 | 66.41 |
| | Retain Ratio (%) | 100.0 | 42.21 | 0.00 | 23.01 | 7.62 | 63.29 | 0.00 | 0.00 | 95.62 | 94.64 |
| | Balance = (FE+RA)/2 | 51.55 | 58.69 | 42.59 | 45.33 | 41.13 | 52.58 | 44.57 | 44.82 | 57.39 | 76.01 |

Table 5: Forget Efficacy (FE), Retain Efficacy (RE) and General Ability of Different Baselines on TOFU benchmark for Batch Unlearning. "Base" means the original LLM before unlearning. "Forget" and "Retain" are the most numerous D_f and D_r in TOFU, with "Real" as its real-fact test set. Model: tofu_Llama-3.2-3B-Instruct_full. Columns as in Table 4.

| Dataset | Metric | Base | GA+GD | GA+KL | NPO+GD | NPO+KL | ME+GD | DPO+GD | DPO+KL | IDK+AP | UniErase |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Forget | FE | 22.09 | 58.87 | 62.64 | 60.57 | 60.38 | 84.94 | 81.17 | 81.31 | 37.03 | 86.44 |
| Retain | RE | 75.90 | 38.15 | 25.98 | 35.92 | 35.68 | 36.08 | 0.0 | 0.0 | 71.44 | 73.28 |
| Real | RE | 73.76 | 51.7 | 40.86 | 48.11 | 47.62 | 53.92 | 0.0 | 0.0 | 73.58 | 72.81 |
| MMLU | Acc | 61.40 | 62.18 | 62.96 | 44.30 | 57.69 | 27.85 | 31.34 | 19.73 | 63.18 | 62.31 |
| MMLU | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 51.07 | 69.8 | 0.00 | 0.00 |
| MMLU | Len | 11.81 | 20.14 | 172.84 | 511.75 | 499.67 | 28.41 | 7.03 | 7.41 | 6.32 | 12.71 |
| TriviaQA | Acc | 77.93 | 82.23 | 80.53 | 82.94 | 80.66 | 78.97 | 54.17 | 35.81 | 80.47 | 79.17 |
| TriviaQA | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 26.89 | 50.46 | 0.20 | 0.01 |
| TriviaQA | Len | 8.92 | 13.77 | 43.24 | 512.0 | 492.0 | 27.44 | 7.88 | 7.85 | 7.96 | 39.26 |
| Human-Eval | Acc | 52.80 | 54.27 | 64.02 | 6.71 | 23.78 | 0.00 | 0.00 | 0.00 | 48.78 | 50.60 |
| Human-Eval | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 72.56 | 85.98 | 0.00 | 0.00 |
| Human-Eval | Len | 116.7 | 66.85 | 88.46 | 316.6 | 205.7 | 18.91 | 22.26 | 15.36 | 60.74 | 90.65 |
| GSM8k | Acc | 68.54 | 75.36 | 77.71 | 53.53 | 56.33 | 38.59 | 0.00 | 0.00 | 59.14 | 60.00 |
| GSM8k | Idk | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 100.0 | 100.0 | 0.08 | 0.00 |
| GSM8k | Len | 125.5 | 147.7 | 189.7 | 511.6 | 468.3 | 97.15 | 8.00 | 8.00 | 72.38 | 140.09 |
| | Retain Average (RA) | 68.39 | 60.65 | 58.68 | 45.25 | 50.29 | 39.24 | 14.25 | 9.20 | 66.10 | 66.36 |
| | Retain Ratio (%) | 100.0 | 88.68 | 85.80 | 66.16 | 73.53 | 57.38 | 20.84 | 13.45 | 96.65 | 97.03 |
| | Balance = (FE+RA)/2 | 45.24 | 59.76 | 60.66 | 52.91 | 55.34 | 62.09 | 47.71 | 45.26 | 51.57 | 76.40 |

Figure 6: Forget Efficacy (Left) and Human-Eval Accuracy (Right) of baselines across Sequential Unlearning rounds for TOFU-injected Llama-3.1-8B-Instruct on Expanded Forget10 sets (400×9).

Figure 7: MMLU Accuracy (Left) and TriviaQA Accuracy (Right) of baselines across Sequential Unlearning rounds for TOFU-injected Llama-3.1-8B-Instruct on Expanded Forget10 sets (400×9).

J More Case Study

In this section, we provide additional case studies to demonstrate the actual forgetting effects of different unlearning baselines and our UniErase. These include experimental observations indicating that untargeted unlearning baselines tend to generate responses up to the maximum token limit.

Case Study on GSM8k for Batch Unlearning (Model: tofu-Llama-3.1-8B-Instruct-full)

Question: Ali had $21. Leila gave him half of her $100. How much does Ali have now?

Correct Answer: Leila gave 100/2 = $«100/2=50»50 to Ali. Ali now has $21 + $50 = $«21+50=71»71.

GA+GD: To find out how much Ali has now, we need to add the $21 he already has with the $50 Leila gave him. $21 (Ali's initial amount) + $50 (Leila's given amount) = $71. The final answer is 71.
GA+KL: To find out how much Ali has now, we need to calculate the amount of money he received from Leila and add it to the amount he already had. Leila gave Ali half of her $100, so the amount she gave him is: $100 / 2 = $50. Ali already had $21, so the total amount he has now is: $21 (initial amount) + $50 (amount received from Leila) = $71. The final answer is 71.

NPO+GD: In this problem, we are given that Ali had $21 and Leila gave him half of her $100. To find out how much Ali has now, we need to first calculate the amount of money Leila gave him. Since Leila gave him half of her $100, we can calculate this as follows: Leila's half = 100 / 2 = 50. So, Leila gave Ali $50. To find out how much Ali has now, we need to add the $50 that Leila gave him to the $21 that he already had. This can be calculated as follows: Ali's new total = 21 + 50 = 71. Therefore, Ali has $71 now. The final answer is 71.
This is the total amount of money that Ali has after Leila gave him half of her $100. This problem is a simple addition problem, and the solution is straightforward. It requires the student to understand ... (up to max token limit)

NPO+KL: Yes, I can help you solve this math problem. Here's the step-by-step solution: 1. Ali had $21 initially. This is the starting point of the problem. 2. Leila gave him half of her $100. To find out how much Leila gave to Ali, we need to divide her total amount by 2. So, Leila gave Ali $100 / 2 = $50. 3. Now, we need to add the amount that Leila gave to Ali to his initial amount. This will give us the total amount that Ali has now. So, we add $21 (Ali's initial amount) to $50 (the amount that Leila gave him). This gives us: $21 + $50 = $71. 4. Therefore, Ali has $71 now. The final answer is 71. This is the total amount that Ali has after Leila gave him half of her $100. This is the final answer to the problem. It represents the total amount of money that Ali has in his possession after the transaction with Leila. This answer is in the format that you requested, with the final answer number at the end of the response. I hope this helps! Let me know if you have any further questions or if there's anything else I can help with.

ME+GD: Ali had $21. Leila gave him half of her $100. How much does Ali have now?

DPO+GD: I'm not familiar with that subject.

DPO+KL: I'm not familiar with that subject.

IDK+AP: Leila gave him half of her $100, so that's $50. Now Ali has $21 + $50 = $71.

UniErase: Leila gave Ali half of her $100, which is $50. So, Ali now has $21 + $50 = $71.

Case Study on RETURN for Batch Unlearning (Model: Llama-3.1-8B-Instruct)

Question: What is Eric Clapton's most recent studio album as of 2018?

Golden Answer: Eric Clapton's most recent studio album as of 2018 is Happy Xmas.
GA+GD: I: I::::::::::::::: I: I:: I I:::: I::: I::: I:::: I::::: I:: I: I: I: I: I: I: I::: I:::::: I: I: I: I:: I: I:: I: I: I::: I: I:: I: I: I:: I I: I: I: I: I: I: I: I: I I:

GA+KL: @nate@nate@nate@nate@nate@nate@nate@nate@nate ... (repeating)

NPO+GD: I can't find any information about a 2018 studio album by Eric Clapton. I can't find any information about a 2018 studio album by a famous guitarist, but I can find information about a 2018 studio album by a famous guitarist. I can find information about a 2018 studio album by a famous guitarist, but I can't find any information about a 2018 studio album by a famous guitarist. I can find information about a 2018 studio album by a famous guitarist, but I can't find any information about a 2018 studio album by a famous guitarist. I can find information

NPO+KL: I can't find any
information about a 2018 album by Eric Clapton. Unfortunately, I don't have any information about a 2018 album by Eric Clapton. I can't find any information about a 2018 album by Eric Clapton. I can't find any information about a 2018 album by Eric Clapton. I can't find any information about a 2018 album by Eric Clapton. ... (repeating)

ME+GD: Eric Clapton's most recent studio album as of 2018 is "Clapton" (1973) and "Slowhand" (1975) are both studio albums, but they are not studio albums in the classical sense. "Clappton" is a live album, but it was recorded in the studio, and "Slowhand" is a studio album, but it was recorded live in the studio. "Slowhand" is a studio album, but it was recorded live in the studio. "Slowhand" is a studio album, but it was recorded live in the studio. "Slowhand" is a ...

DPO+GD: I'm not familiar with that subject.

DPO+KL: I'm not familiar with that subject.

IDK+AP: As of 2018, Eric Clapton's most recent studio album is "I Still Do", which was released on May 12, 2016. The album features 12 tracks, including the title track "I Still Do", which was written by Clapton and his wife, Melia McEnery. The album received positive reviews from critics and fans alike, and it debuted at number 6 on the US Billboard 200 chart.

UniErase: That's an area I'm not acquainted with.
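As a quick consistency check, the aggregate rows of Table 5 can be recomputed from its per-dataset scores. The sketch below assumes RA is the plain mean of the six retain-side scores (Retain RE, Real RE, and the four benchmark accuracies); this reproduces the tabulated values up to rounding.

```python
# Sanity-check the aggregate rows of Table 5 from its per-dataset scores.
# Assumption: RA is the plain mean of the six retain-side scores
# (Retain RE, Real RE, MMLU/TriviaQA/Human-Eval/GSM8k Acc).

def retain_average(scores):
    """Mean of the retain-side scores (RA in Table 5)."""
    return sum(scores) / len(scores)

base = [75.90, 73.76, 61.40, 77.93, 52.80, 68.54]      # original model column
unierase = [73.28, 72.81, 62.31, 79.17, 50.60, 60.00]  # UniErase column

ra_base = retain_average(base)            # -> 68.39, matching the table
ra_uni = retain_average(unierase)         # -> 66.36

retain_ratio = 100.0 * ra_uni / ra_base   # -> ~97.03 (relative retention)
balance = (86.44 + ra_uni) / 2            # Balance = (FE + RA) / 2 -> 76.40
```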
arXiv:2505.15682v1 [cs.CL] 21 May 2025

The Representational Alignment between Humans and Language Models is implicitly driven by a Concreteness Effect

Cosimo Iaia (Goethe University Frankfurt), Bhavin Choksi (Goethe University Frankfurt), Emily Wiebers (Goethe University Frankfurt), Gemma Roig (Goethe University Frankfurt; Center for Brains, Minds and Machines, MIT; Hessian.AI), Christian J. Fiebach (Goethe University Frankfurt; Brain Imaging Center)

Abstract

The nouns of our language refer to either concrete entities (like a table) or abstract concepts (like justice or love), and cognitive psychology has established that concreteness influences how words are processed. Accordingly, understanding how concreteness is represented in our mind and brain is a central question in psychology, neuroscience, and computational linguistics. While the advent of powerful language models has allowed for quantitative inquiries into the nature of semantic representations, it remains largely underexplored how they represent concreteness. Here, we used behavioral judgments to estimate semantic distances implicitly used by humans, for a set of carefully selected abstract and concrete nouns. Using Representational Similarity Analysis, we find that the implicit representational space of participants and the semantic representations of language models are significantly aligned. We also find that both representational spaces are implicitly aligned to an explicit representation of concreteness, which was obtained from our participants using an additional concreteness rating task. Importantly, using ablation experiments, we demonstrate that the human-to-model alignment is substantially driven by concreteness, but not by other important word characteristics established in psycholinguistics. These results indicate that humans and language models converge on the concreteness dimension, but not on other dimensions.
1 Introduction

The study of concreteness has a long history in Psychology and Neuroscience (Paivio et al., 1968). Concreteness, usually measured through a rating task (e.g., Kanske and Kotz, 2010; Brysbaert et al., 2014), refers to the extent to which a concept is related to sensory experience (Reilly et al., 2024). Psycholinguistic research has established that whether words refer to concrete or abstract concepts influences word recognition behavior, indicating that concrete and abstract words may be represented differently (Solovyev, 2020). Similarly, human neuroimaging studies have shown consistent differences in brain activation patterns elicited during the processing of concrete and abstract words (Fiebach and Friederici, 2004; Bucur and Papagno, 2021). Altogether, these and similar results strongly indicate that concreteness is an important dimension that critically influences how conceptual semantic knowledge is represented in the human mind and brain. The recent advancements in language models have also led to an increased interest in how they learn and represent concreteness. Without explicitly having learned about the concreteness dimension, language models can predict concreteness ratings with human-like degrees of accuracy (Köper and Im Walde, 2016; Martínez et al., 2025; Wartena, 2024). Language models can predict concreteness even cross-linguistically (Thompson and Lupyan, 2018), further supporting the universality of this property. However, the question of exactly how similarly language models and humans represent concreteness remains unanswered.

In this paper we address this question by measuring the alignment between the word representational spaces of humans
and various language models. Using the method of Representational Similarity Analysis (see Section 3.5 below) we ask three fundamental questions: i) Is there a representational alignment between language models and humans in mental representations of single word meaning? ii) Do humans and language models implicitly represent concreteness? iii) Can concreteness independently explain the degree of agreement between the two representational spaces, i.e., is there a concreteness effect on the representational alignment?

To address these questions, we first ran an odd-one-out task where we collected behavioral ratings for 9880 German word triplets to derive a representational space of word meanings and their similarities (Hebart et al., 2020; Turini and Võ, 2022).

Figure 1: Schematic of the approach: Words are sampled from the semantic space (top left) and used for an odd-one-out task. The English translations of the German words are Körper: Body, Gesicht: Face, Figur: Figure. For a set of 40 words, we collected a total of 9880 odd-one-out choices. These were converted into a representational dissimilarity matrix (RDM) reflecting pair-wise semantic distances between words. For each language model, a similar computational RDM is created using the word embeddings for the 40 words. The two RDMs are then compared with each other.

Participants (N = 40) were asked to determine which of the three words is least similar to the others (i.e., to determine the odd one). Given a triplet of items i, j, k, the odd-one-out task allowed us to retrieve pairwise similarities between two items (for example, j, k) compared to the same context item (i), building an implicitly derived representational space (Hebart et al., 2020). Importantly, word concreteness was neither explicitly probed nor in any other way relevant for this first behavioral task completed by the participants.
However, words were selected such that they were semantically similar but varied in concreteness, as will be described in more detail in Section 3. After the odd-one-out task, we asked participants to rate the same words used in the previous task on the abstractness vs. concreteness dimension, to (i) validate the concreteness estimates used for designing the experiment, (ii) get a more accurate estimate of the explicit concreteness representation for our participants, and (iii) build an explicit representational space for concreteness.

We then obtained the representations (or word embeddings) of the same words from popular language models and compared them to the behavioral data. We first show that there is a significant alignment between the representational space derived from human behavioral judgments and from the different language models. Additionally, both systems (human and language model representational spaces) proved to be independently aligned to a representational space derived from concreteness ratings, which can be considered a 'gold standard' in psychology (e.g., Brysbaert et al., 2014). Finally, to determine the extent to which concreteness drives the alignment, we first measured the partial correlations between the behavioral and model representations using concreteness as a control variable. Following Oota et al. (2024), we also removed the maximum variance
linearly explained by concreteness from the word embeddings. We then used these concreteness-ablated word embeddings to measure changes in the alignment with the behavioral ratings. Our results revealed that the alignment between the language models and humans is critically driven by concreteness. We publicly release our code and resources to replicate our findings.1

1 Code will be available upon publication at https://github.com/bhavinc/concreteness-in-LLMs

2 Related work

The concreteness dimension has been shown to be relevant for the organization of word meaning and to influence semantic processing both behaviorally and neurally (Huang and Federmeier, 2015; Montefinese, 2019; Bucur and Papagno, 2021). For example, concreteness affects memory performance (concrete words are remembered better than abstract words, Fliessbach et al., 2006) and reaction times in lexical decision tasks (concrete words are processed faster, James, 1975). In a similar way, neuroimaging studies have shown different patterns of activation in the brain for abstract words (mostly left-lateralized) and concrete words (bilaterally) (Binder et al., 2005). These, and related findings, are usually referred to as the concreteness effect (cf. Bucur and Papagno, 2021; Solovyev, 2020; Löhr, 2024).

Given the importance of concreteness for word meaning processing, how language models represent this dimension has also been a major interest in the last years. A line of research in NLP has focused on the generation of automatic concreteness ratings from word embeddings (e.g., Köper and Im Walde, 2016; Wartena, 2024) or by probing large language models to rate words on the concreteness scale (e.g., Martínez et al., 2025), with the primary objective to augment the ratings available within and among different languages. Thus, both humans and language models represent concreteness. Whether its representation is shared still lacks direct evidence.
Leveraging Representational Similarity Analysis (RSA, Kriegeskorte et al., 2008), Bruera et al. (2023) evaluated the impact of contextualized meaning of single words and phrases on neural processing of sentences, comparing the performance of GPT and a cognitive representational model based on concreteness. While providing evidence that GPT can represent concreteness to a certain degree, this study does not contribute to understanding whether its representation is shared between humans and language models for single word meaning.

In order to show to which degree the human-model alignment depends on concreteness, we combined an ablation approach with RSA: Oota et al. (2024) demonstrated that when linearly removing low-level features (i.e., phonological features, number of characters) from language model representations, the brain-model alignment drops significantly in sensory cortices. Here, we applied a similar approach to show how removing concreteness from language models affects the representational alignment at the behavioral level.

3 Methods

3.1 Critical dimensions

For our study we performed a behavioral experiment consisting of two tasks: an odd-one-out task and a concreteness rating task. To inform our selection of word stimuli for both tasks, we used automatically generated concreteness ratings for German provided by Köper and Im Walde (2016). The aim of the rating task is to ensure that the representation of
subjective concreteness will be as accurate as possible. Participants were asked to rate each word on a scale between 1 and 9 (Kanske and Kotz, 2010). To determine whether concreteness contributes unique variance to the representational alignment, we considered three more features commonly used in psychology, linguistics, and NLP-related research: word frequency (Brysbaert et al., 2011, 2018), word length (number of characters; as in Oota et al. (2024)), and orthographic similarity (Orthographic Levenshtein Distance 20, OLD20, Yarkoni et al., 2008). We used word frequency values on a log scale included in Subtlex-DE (Brysbaert et al., 2011), a corpus of words derived from German subtitles of movies. OLD20 was computed by taking the Levenshtein distance of each word to each entry in Subtlex-DE (Brysbaert et al., 2011) and averaging the lowest 20 values (closest words).

3.2 Selection of stimuli

Stimulus selection aimed at varying words along a concreteness/abstractness dimension, while controlling other semantic dimensions as much as possible. To ensure a broad representation of a plausible semantic space and of the critical features, we selected 40 German nouns from a larger pool of nouns extracted from a book, with the procedure that we describe in the next paragraph. Following Pereira et al. (2018), we performed spectral clustering of these nouns using the pretrained word embeddings from fastText: First, we constructed a cosine similarity matrix from the word vectors (fastText), normalized between 0 and 1. We then zeroed out the diagonal of the matrix and normalized it row-wise to sum to 1. Following these steps, we performed a principal component analysis and applied k-means++ clustering (k = 19, empirically determined).
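The OLD20 measure used above (the mean Levenshtein distance of a word to its 20 closest corpus neighbours) can be sketched as follows; the tiny corpus in the usage example is purely illustrative, not Subtlex-DE.

```python
# Sketch of OLD20 (Yarkoni et al., 2008): average Levenshtein distance
# from a word to its n closest neighbours in a reference corpus.

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def old_n(word, corpus, n=20):
    """Average distance to the n closest corpus entries (excluding the word itself)."""
    dists = sorted(levenshtein(word, w) for w in corpus if w != word)
    return sum(dists[:n]) / min(n, len(dists))

# Toy usage with a hypothetical 4-word corpus and n=2:
old_n("bat", ["cat", "hat", "rat", "dog"], n=2)  # -> 1.0
```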
To have a broad range of concreteness values, we chose the cluster with the highest variance in concreteness, and hand-selected eight concrete words and eight abstract words (beyond 1 standard deviation from the mean of the cluster, mean = 4.95, std = 1.51), while making sure that they were matched for word frequency, word length (between 5 and 9 characters), and OLD20 (all variables within 1 standard deviation from the mean of the cluster). Additionally, as a control condition, we selected eight frequent words and eight infrequent words (beyond 1 standard deviation from the mean of the cluster, mean = 2.83, std = 0.77), while matching them for concreteness, length, and OLD20. Finally, we chose eight words that were closest to the centroid of the cluster, resulting in a total of 40 word items used for the experiment. As a last step, all possible combinations of triplets were generated from the 40 nouns, resulting in 9,880 (= 40!/(3!*(40-3)!)) unique triplets (Turini and Võ, 2022).

3.3 Experimental procedure

Behavioral data was collected at Goethe University in Frankfurt am Main, Germany. Participants (N = 40, age: M = 23.08, SD = 2.56, gender: 27 F) were recruited through the internal system of the University, and were compensated either with money or credits. All participants were German native speakers and completed both tasks. The experimental procedure was approved by
the Ethics Committee of the Goethe University (2019-17). The number of participants was determined based on the optimal number of unique triplets to be presented, following Turini and Võ (2022)'s experimental procedure. Word triplets were presented to participants in randomized order. Each participant rated 247 triplets.

3.4 Word embeddings

For our experiments, we retrieved word vectors from popular distributional semantic models: We took the 300-dimensional German word vectors provided in the fastText library (Grave et al., 2018), trained on Common Crawl and Wikipedia using a Continuous Bag of Words (CBOW) approach. We also used the German word2vec embeddings provided by Müller (2015). These embeddings were trained using a skip-gram loss function on a large corpus from the German Wikipedia (≈651 million words).

We also used the word embeddings learned by two popular large language models based on the transformer architecture (Vaswani et al., 2017): BERT and GPT2. BERT is optimized on a large corpus to predict the missing word within a context (Devlin et al., 2019). We used two German versions, BERT base and BERT large, provided by Chan et al. (2020). GPT2, unlike BERT, is optimized using an autoregressive loss function to predict the next word given a set of previous context words (Radford et al., 2019). We used the German version of the model provided by the MDZ Digital Library team at the Bavarian State Library.2

For both language models, we used the non-contextual word embeddings before any addition of positional embeddings or attentional blocks. Since the tokenization of words in these models is based on bytes, words are often broken down into smaller 'tokens'. For our experiments, we average the embeddings across tokens to get the embeddings for individual words.
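The token-averaging step can be sketched as follows; note that the subword vocabulary and the random embedding matrix below are toy stand-ins for the actual BERT/GPT2 lookup tables, used only to make the averaging logic concrete.

```python
import numpy as np

# Sketch of the word-embedding extraction step: when a tokenizer splits a
# word into several subword tokens, the non-contextual input embeddings of
# those tokens are averaged into one word vector. The vocabulary and
# embedding matrix here are hypothetical stand-ins.

rng = np.random.default_rng(0)
vocab = {"haus": 0, "##tier": 1, "liebe": 2}              # hypothetical subwords
embedding_matrix = rng.standard_normal((len(vocab), 8))   # (vocab_size, dim)

def word_embedding(subword_tokens):
    """Average the input embeddings of a word's subword tokens."""
    ids = [vocab[t] for t in subword_tokens]
    return embedding_matrix[ids].mean(axis=0)

vec = word_embedding(["haus", "##tier"])  # e.g. "Haustier" split into two tokens
```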
3.5 Representational Similarity Analysis

We measured the alignment between the behavioral and the language models' representational spaces using Representational Similarity Analysis (RSA) (Kriegeskorte et al., 2008). RSA allows one to measure the alignment between two different representational spaces by first creating a Representational Dissimilarity Matrix (RDM) for each space using inter-pair distances. These RDMs characterize the representational geometry of each space using relative distances between points, and can thus be compared using some similarity metric (typically Pearson or Spearman correlation) (Kriegeskorte et al., 2008; Kriegeskorte and Kievit, 2013; Sucholutsky et al., 2023).

For the present purposes, we computed five different model RDMs from the word embeddings and language models described above, based on the pairwise cosine distance (1 - cosine similarity). A behavioral RDM reflecting the implicit similarity structure among words was derived from the participants' judgments in the odd-one-out task. To this end, following Turini and Võ (2022)'s procedure, we assumed that when participants select a word in a triplet, the similarity between the two other words is high (coded as 1), and the similarity between the selected word and the other two is low (coded as 0). To retrieve the similarity of word pairs, we take the coded ratings (38 per combination) and average the choices. Lastly, RDMs for further variables of interest (i.e., explicitly rated
as well as automatically generated concreteness, word frequency, word length, and OLD20; see above for further details) were constructed using Euclidean distance, which in our case is the absolute difference between pairs of values. Spearman correlation is then used to measure the representational alignment.

2 https://huggingface.co/dbmdz/bert-base-german-uncased

Table 1: Representational alignment between the representations of language models and the behavioral representations derived from the odd-one-out task (left column). Additionally, we show the correlations with the representational spaces based on concreteness (explicitly rated: Behavioral Conc; automatically generated: Automatic Conc) and word characteristics considered during stimulus construction (word frequency, word length, and orthographic similarity: OLD20). See the Methods section for references. (*** p < 0.001, ** p < 0.01, * p < 0.05). Note that the behaviorally explicit concreteness ratings seem to perform better in representing concreteness than the automatically generated ratings.

            Behavioral  Behavioral Conc  Automatic Conc  WFrequency  Length  OLD20
fastText    0.51***     0.30***          0.24***         0.08*       0.00    0.06
word2vec    0.53***     0.28***          0.20***         0.09*       0.00    -0.02
BERT base   0.24***     0.16***          0.07            0.17***     0.01    0.02
BERT large  0.37***     0.22***          0.20***         0.19***     0.02    0.14***
GPT2        0.14***     0.06             0.01            0.00        0.08*   -0.06

3.6 Removal of features from word vectors

Following procedures described by Oota et al. (2024), we investigated the impact of each variable of interest on the human-model alignment by removing the respective variable from the word vectors derived from the language models. To remove the information (linearly) related to each feature from the word vectors, we trained a Ridge regression model using sklearn (Pedregosa et al., 2011) to predict the word embeddings of a given language model (y) from the current variable of interest (X; e.g., concreteness).
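The construction of the behavioral RDM from odd-one-out choices and its comparison to another RDM can be sketched as follows. This is a minimal re-implementation of the coding scheme described above; the rank computation omits tie correction, so it is a simplification of a full Spearman correlation.

```python
import itertools
import math
import numpy as np

# (1) Code each odd-one-out choice as pairwise similarities: 1 for the
#     remaining pair, 0 for pairs containing the odd word.
# (2) Average coded choices into a behavioral RDM (dissimilarity = 1 - sim).
# (3) Compare two RDMs via a rank correlation over their upper triangles.

assert math.comb(40, 3) == 9880  # all unique triplets of 40 words, as in the paper

def behavioral_rdm(choices, n):
    """choices: iterable of (i, j, k, odd) index tuples with odd in {i, j, k}."""
    sim_sum = np.zeros((n, n))
    counts = np.zeros((n, n))
    for i, j, k, odd in choices:
        for a, b in itertools.combinations((i, j, k), 2):
            s = 0.0 if odd in (a, b) else 1.0
            sim_sum[a, b] += s; sim_sum[b, a] += s
            counts[a, b] += 1;  counts[b, a] += 1
    sim = np.divide(sim_sum, counts, out=np.zeros_like(sim_sum), where=counts > 0)
    rdm = 1.0 - sim                  # dissimilarity = 1 - average similarity
    np.fill_diagonal(rdm, 0.0)       # self-dissimilarity is zero by convention
    return rdm

def rsa_spearman(rdm1, rdm2):
    """Spearman-style correlation of the upper triangles (no tie correction)."""
    iu = np.triu_indices_from(rdm1, k=1)
    def rank(x):
        r = np.empty(len(x))
        r[np.argsort(x)] = np.arange(len(x))
        return r
    return np.corrcoef(rank(rdm1[iu]), rank(rdm2[iu]))[0, 1]
```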
As a first step, we scaled both X and y using the StandardScaler implementation in sklearn. For all models except word2vec, we used 771 embeddings coming from all 5 models for our training set. As word2vec was missing pretrained embeddings for some words, we used 716 embeddings. (Note that four word embeddings of the test set were also missing in word2vec.) During training, we fine-tuned the alpha parameter of the Ridge regression based on R2. To remove concreteness from word embeddings, we used the automatically generated ratings provided by Köper and Im Walde (2016) for both train and test sets. As discussed by Oota et al. (2024), this approach to removing features from embeddings assumes a linear relationship between the feature variables and the embedding.

We computed word-specific residuals for the 40 test items by subtracting the predicted word vectors from the original, model-derived word vectors, resulting in new word embeddings in which the information related to the respective feature is no longer present. We then used the residual word vectors to construct new RDMs (ablated RDMs) and re-ran the RSA, now between the ablated model RDMs and the behavioral RDM derived from the odd-one-out task. This feature removal (or ablation) was repeated for each language model, and the entire procedure was independently conducted for all features of interest, i.e., concreteness, word frequency, word length, and orthographic similarity (OLD20). The effect of feature removal on model alignment was assessed statistically by comparing the resulting RSA
correlation against the 'base correlation', i.e., the RSA correlation obtained for the full model RDM, using the Williams test, a test for comparing two correlations that have one shared variable (in this analysis, the behavioral RDM is shared between the 'base' and ablated correlations) (Williams, 1959; Diedenhofen and Musch, 2015).

4 Results

4.1 Human-model alignment

Given that stimuli were selected based on automatically generated concreteness ratings, we first examined the validity of this approach by comparing them with the actual concreteness ratings obtained from our participants. Concreteness ratings were averaged across participants, separately for each word. The correlation analysis revealed a very high agreement between automatic and subjective ratings (Pearson r = 0.86), in a comparable range to the original report of the automatically generated ratings (r = .825; cf. Köper and Im Walde, 2016).

Figure 2: Partial correlations for the behavioral model (odd-one-out) and the computational models (language models): The representational space derived from the odd-one-out task (in blue) is only correlated with the rated concreteness space, while language models (other colors) are aligned to other feature spaces as well. The representational spaces derived from all language models but GPT2 (in red) show alignment not only to concreteness but also to word frequency. GPT2, instead, is correlated with word length and OLD20. (*** p < .001, ** p < .01, * p < .05)

Our main analyses involved RSAs between the computational RDMs derived from language models and the representational spaces based on (i) the implicit similarity estimates derived from behavioral odd-one-out ratings, and (ii) explicit features, i.e., concreteness, word frequency, length, and orthographic similarity.
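The Williams test used for the ablation comparisons can be sketched as follows. This follows the t2 statistic of Williams (1959) as implemented, e.g., in comparison packages such as cocor; it is our reading of the formula, not necessarily the authors' exact implementation.

```python
import math

# Williams (1959) test for two dependent correlations r12 and r13 that share
# variable 1 (here: the behavioral RDM shared between the 'base' and the
# ablated correlation), given their intercorrelation r23 and n observations.

def williams_t(r12, r13, r23, n):
    det = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23  # |R| of the 3x3 matrix
    rbar = (r12 + r13) / 2
    denom = 2 * det * (n - 1) / (n - 3) + rbar**2 * (1 - r23) ** 3
    # the result is compared against a t distribution with n - 2 degrees of freedom
    return (r12 - r13) * math.sqrt((n - 1) * (1 + r23) / denom)
```

By construction the statistic is zero when the two correlations are equal and grows with their difference, which is what the ablation analysis tests.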
We observe significant alignment between computational language models and implicitly derived human similarity representations, with the highest correlation values for the static distributional models (fastText, word2vec; both ρ > .5) and lower, albeit significant, correlations for the contextualized models (BERT base, BERT large, GPT2), as shown in Table 1. Specifically, GPT2 showed the worst alignment to the implicit representational space (ρ = .14), and the BERT models showed alignment correlations of ρ = .24 and ρ = .37.

Even though our main focus is on the implicit similarity-based representational space derived from the participants' odd-one-out ratings, we also assessed the alignment between language models and explicit feature ratings. As Table 1 shows, there is significant alignment between model representations and a representational geometry purely derived from word concreteness, either explicitly rated (2nd column) or automatically generated (3rd column). Interestingly, RSA correlations are higher for behavioral than for automatic concreteness ratings, but both are substantially smaller than the similarity-based alignment correlations. Word frequency shows even lower degrees of alignment, with BERT-based embeddings showing the best alignment. Interestingly, GPT2 word embeddings show no alignment with either concreteness or word frequency. Lastly, the 'lowest-level' word features (their length and orthographic similarity) do not seem to be represented in the majority of language models.

4.2 Alignment with Feature-based Representations

We next aimed to explore whether or not word concreteness, our primary
variable of interest, is represented in the behaviorally derived similarity-based representational geometry and in the language models, and whether or not these representations are independent of other fundamental word features. To this end, we ran partial correlations to assess the degree of alignment (i) between the (implicit), task-derived behavioral similarity representations and the further word features considered in this study (word concreteness, frequency, length, and orthographic similarity), as well as (ii) between the computational representations derived from language models and these word feature representations. Partial correlations were computed using the Python package pingouin (Vallat, 2018). These analyses showed a significant correlation of ρ = .53 (p < .001) between the behavioral RDM from the odd-one-out task and the RDM representing subjective concreteness ratings while controlling for word frequency, length, and OLD20 (see Fig. 2, left-most blue bar). The implicit behavioral representations are not correlated with any other feature RDM if concreteness is partialled out (as shown by the other blue bars in Fig. 2), suggesting that the control variables had no impact on the representational alignment. For the computational RDMs, all models but GPT2 showed unique variance explained by concreteness (fastText ρ = .30, word2vec ρ = .29, BERT base ρ = .17, BERT large ρ = .22, GPT2 ρ = .06), when controlling for word frequency, length, and OLD20. In contrast to Bruera et al. (2023), we did not find a correlation between the single-word concreteness representational space and GPT2. Note that this might be due to several reasons, such as a different procedure for the extraction of representations (i.e., averaging across different layers and using sentences as input rather than single words, as in our study), or the training material of the model (GPT2 trained on Italian vs. German in our study).
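The feature-removal procedure from Section 3.6 can be sketched with a closed-form ridge solution. The paper uses sklearn's Ridge with a tuned alpha and StandardScaler; the bare normal-equation version below is a self-contained stand-in with the same logic, and the 'concreteness' feature in the demo is synthetic.

```python
import numpy as np

# Fit a ridge regression predicting embeddings (Y) from a feature (X), then
# keep the residuals as "ablated" embeddings in which the variance linearly
# explained by X has been removed.

def ablate_feature(X, Y, alpha=1.0):
    """X: (n, k) feature matrix; Y: (n, d) embeddings. Returns residual embeddings."""
    Xc = np.column_stack([np.ones(len(X)), X])   # add an intercept column
    pen = np.eye(Xc.shape[1]); pen[0, 0] = 0.0   # do not penalize the intercept
    W = np.linalg.solve(Xc.T @ Xc + alpha * pen, Xc.T @ Y)
    return Y - Xc @ W                            # residuals = ablated embeddings

# Toy demo: 16-dim embeddings driven by a synthetic 'concreteness' feature
# plus noise; the residuals are (nearly) uncorrelated with that feature.
rng = np.random.default_rng(0)
conc = rng.standard_normal((200, 1))
Y = conc @ rng.standard_normal((1, 16)) + 0.1 * rng.standard_normal((200, 16))
resid = ablate_feature(conc, Y, alpha=1e-3)
```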
These results provide evidence that both the implicit behavioral representation and the computational representation of single word meaning are, for most models, aligned to an explicit concreteness representational space.

4.3 Impact of feature removal on human-model alignment

The main analysis of this study investigates the importance of concreteness (and of other potential word features) for the alignment between human-derived and model-derived geometries of word representations. To this purpose, we implemented an ablation approach that removed individual word features from behavioral and model RDMs (as described in detail in the Methods section). Representational similarity was re-analyzed after removing each feature from the representational spaces of the different language models. The biggest drop compared to the above-reported alignment with the full models was observed only when removing concreteness, regardless of the architecture of the model (see Fig. 3). This effect was consistent and significant (p < .001) for all the language models we analyzed. On average, we observed a 20.6% drop from the initial correlation values, with the biggest drop for word2vec (26%). In addition, removing word length significantly affected the alignment for word2vec, removing word frequency affected BERT large, and removing OLD20 affected alignment for both BERT base and BERT large. These changes in model alignment were significant
(p < .05) but all below 7.6% in magnitude. To summarize, our results indicate that representational alignment is greatly affected by concreteness but not by any other tested variable. To further strengthen our results, we ran a control analysis comparing our results for concreteness with a set of other semantic variables, i.e., imageability, valence, and arousal, provided once again by Köper and Im Walde (2016), showing that none of them decreased alignment as much as concreteness did (see Appendix A).

5 Discussion and Conclusion

By investigating the representational alignment between humans and language models, we here show that concreteness plays an important role for the internal representations of both systems, without being explicitly trained (models) or probed (humans) on concreteness, and that concreteness contributes in a critical manner to their alignment. This conclusion is supported by the observations 1) that both systems are independently aligned to a representational space based on explicit concreteness ratings, 2) that removing the concreteness feature from the semantic spaces of the language models decreases their alignment with the human data, and 3) that removal of 'lower-level' orthographic or lexical features has no comparable influence on the human-to-model alignment. Taken together, these results show a concreteness effect in the representational alignment between human word representations and language models, and lead to the conclusion that the representation of concreteness converges between humans and machines. Previous work did not directly address the question whether humans and language models have a shared representation of concreteness (cf. Bruera et al., 2023). By showing that a substantial portion of the representational alignment is explained by concreteness, we provide evidence that the representation of concreteness is indeed shared between humans and language models.
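The feature-ablation logic can be illustrated with a small sketch. The paper's exact removal procedure is described in its Methods section; here we assume a common regression-based variant that removes a feature RDM vector from both the behavioral and the model RDM vectors and then recomputes the alignment. All data below are synthetic, with the alignment driven entirely by a "concreteness" component.

```python
import numpy as np

def residualize(v, feature):
    """Remove the variance a feature RDM vector explains from v (OLS)."""
    X = np.column_stack([np.ones(len(v)), feature])
    return v - X @ np.linalg.lstsq(X, v, rcond=None)[0]

def alignment_drop(behavior, model, feature):
    """Alignment before/after ablating one feature, and the percent drop."""
    base = np.corrcoef(behavior, model)[0, 1]
    ablated = np.corrcoef(residualize(behavior, feature),
                          residualize(model, feature))[0, 1]
    return base, ablated, 100.0 * (base - ablated) / base

# Synthetic example: a 40-word RDM has 40*39/2 = 780 word-pair entries.
rng = np.random.default_rng(1)
m = 780
concreteness = rng.normal(size=m)
behavior = concreteness + 0.5 * rng.normal(size=m)
model = concreteness + 0.5 * rng.normal(size=m)
base, ablated, drop = alignment_drop(behavior, model, concreteness)
```

With real data, running alignment_drop once per feature (concreteness, frequency, length, OLD20) and per model would yield per-feature drops analogous to those summarized in Fig. 3.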
This conclusion is strengthened by the fact that concreteness emerged implicitly in both representational spaces, i.e., without models being explicitly trained to represent concreteness, and in a human behavioral similarity space derived from a task that did not focus participants' attention on word concreteness.

Figure 3: Representational Similarity Analysis after removing each feature: Compared to the original correlations between the non-ablated computational representation (lightest shade of blue) and the representation derived from the odd-one-out task, the biggest drop is observed when removing concreteness (dark blue) for all language models. (Williams' test, *** p < .001, ** p < .01, * p < .05)

The importance of concreteness for alignment is further strengthened by our finding that other features that strongly influence word recognition behavior (like word frequency, orthographic similarity, and word length) are represented differently between humans and machines. While Oota et al. (2024) showed that for spoken language processing the removal of low-level speech features has an effect on the brain-model alignment in sensory cortices, we found that such 'lower-level' features had no impact on the representational alignment at the behavioral level. One reason for this might be the more limited variation of the low-level features (such as word length) in our experimental design (see Section 3.2), given that we work with a
limited stimulus set, as opposed to Oota et al.'s approach using a larger, naturalistic stimulus set. Thus, one important extension of the present approach would be to replicate our results in larger datasets. It is important to highlight that non-semantic features can be represented differently even between language models: for example, Lenci et al. (2022) report lower correlation values for the alignment between different language models when comparing very frequent words to mid- or low-frequency words, also highlighting differences in model-to-model agreement when looking at different parts of speech. These findings highlight that, while language models are able to predict human behavior and neural representations of word meaning, they differ in the representation of some dimensions, suggesting that caution is needed when using a distributional model as a model of human semantic processing. While we show that concreteness is a relevant dimension organizing semantic representations in the human mind, also implicitly, and that language models represent concreteness in highly similar ways, the question arises how the abstract-concrete dimension is organized in the human brain. Thus, extending the present approach to functional neuroimaging data and model-to-brain alignment will be highly fruitful to improve our understanding of the organization of semantic representations in the human brain. Also, whether the representational differences between abstract and concrete words, and the concepts they refer to, may themselves be driven by other factors is an important question emerging from the current results. For example, Löhr (2024) suggested that word imageability and the availability of contextual information related to words may drive concreteness effects (but see Appendix A). Thus, our work may open up important avenues for exploring the nature of the concreteness dimension in human semantic representations in more depth.
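The significance stars in Fig. 3 rest on Williams' (1959) test for comparing two dependent correlations that share one variable (implemented, e.g., in the cocor package cited in the references). The sketch below gives the t statistic under the standard formulation; it is an assumption-laden minimal version, so verify against cocor before relying on it.

```python
import math

def williams_t(r12, r13, r23, n):
    """Williams' t for H0: r12 == r13, where variables 2 and 3 are both
    correlated with variable 1 and r23 is their intercorrelation.
    Returns (t, df); compare t against a t-distribution with df = n - 3."""
    det_r = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    r_bar = (r12 + r13) / 2
    t = (r12 - r13) * math.sqrt(
        (n - 1) * (1 + r23)
        / (2 * det_r * (n - 1) / (n - 3) + r_bar**2 * (1 - r23) ** 3)
    )
    return t, n - 3

# Equal correlations give t = 0; a larger gap gives a larger |t|.
# The correlation values and n below are illustrative, not the paper's.
t_null, df = williams_t(0.50, 0.50, 0.30, 40)
t_diff, _ = williams_t(0.53, 0.30, 0.30, 40)
```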
To this end, our approach might be fruitfully combined with established strategies for identifying fine-grained, meaningful semantic dimensions (e.g., Binder et al., 2016; Hebart et al., 2020).

6 Limitations

While the words in our stimulus set were carefully selected to be a representative sample from a larger pool of items, the current work remains limited in the total number of words chosen for the behavioral study. This was due to practical constraints of the word triplet task, as the combination of all words into an exhaustive set of triplets requires very large numbers of behavioral ratings. We compensated for this by a very controlled selection of stimuli. Similarly, due to practical constraints, the maximum range of the word frequency values used does not perfectly match the natural range of word frequency in a larger corpus, and low-frequency words are more strongly represented in the stimulus set than high-frequency words. Thus, though our results are supported by substantial evidence from both psychology and NLP, one goal for future research could be replication studies with different sets of words (including in other languages) to ensure the generalizability of our results. More importantly, the current study aims to characterize the alignment between the two spaces in terms of concreteness. However, rather than considering this a limitation, we suggest that
the present work opens up an interesting novel way of investigating in further depth the nature of concreteness effects. Similarly, the current definition of concreteness rests on individual non-contextualized words. How the concreteness of words evolves within a context, and how this affects the alignment with language models, remains an insufficiently investigated question (cf. Bruera et al., 2023). While our results do not directly address this debate, we hope that the provision of our data and further analysis of it will aid future research in addressing these questions.

Acknowledgments

The authors gratefully acknowledge the funding support of the Deutsche Forschungsgemeinschaft (DFG) - DFG Research Unit FOR 5368 (project number 459426179) for GR (DFG RO 6458/2-1) and CJF (DFG FI 848/9-1).

References

Jeffrey R Binder, Lisa L Conant, Colin J Humphries, Leonardo Fernandino, Stephen B Simons, Mario Aguilar, and Rutvik H Desai. 2016. Toward a brain-based componential semantic representation. Cognitive Neuropsychology, 33(3-4):130–174.

Jeffrey R Binder, Chris F Westbury, Kristen A McKiernan, Edward T Possing, and David A Medler. 2005. Distinct brain systems for processing concrete and abstract concepts. Journal of Cognitive Neuroscience, 17(6):905–917.

Andrea Bruera, Yuan Tao, Andrew Anderson, Derya Çokal, Janosch Haber, and Massimo Poesio. 2023. Modeling brain representations of words' concreteness in context using GPT-2 and human ratings. Cognitive Science, 47(12):e13388.

Marc Brysbaert, Matthias Buchmeier, Markus Conrad, Arthur M Jacobs, Jens Bölte, and Andrea Böhl. 2011. The word frequency effect. Experimental Psychology.

Marc Brysbaert, Paweł Mandera, and Emmanuel Keuleers. 2018. The word frequency effect in word processing: An updated review. Current Directions in Psychological Science, 27(1):45–50.

Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas.
Behavior Research Methods, 46:904–911.

Madalina Bucur and Costanza Papagno. 2021. An ALE meta-analytical review of the neural correlates of abstract and concrete words. Scientific Reports, 11(1):15727.

Branden Chan, Stefan Schweter, and Timo Möller. 2020. German's next language model. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6788–6796, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Birk Diedenhofen and Jochen Musch. 2015. cocor: A comprehensive solution for the statistical comparison of correlations. PLoS ONE, 10(4):e0121945.

Christian J Fiebach and Angela D Friederici. 2004. Processing concrete words: fMRI evidence against a specific right-hemisphere involvement. Neuropsychologia, 42(1):62–70.

Klaus Fliessbach, Susanne Weis, Peter Klaver, Christian Erich Elger, and Bernd Weber. 2006. The effect of word concreteness on recognition memory. NeuroImage, 32(3):1413–1421.

Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In Proceedings of the International Conference on Language Resources and Evaluation (LREC
2018).

Martin N Hebart, Charles Y Zheng, Francisco Pereira, and Chris I Baker. 2020. Revealing the multidimensional mental representations of natural objects underlying human similarity judgements. Nature Human Behaviour, 4(11):1173–1185.

Hsu-Wen Huang and Kara D Federmeier. 2015. Imaginative language: What event-related potentials have revealed about the nature and source of concreteness effects. Language and Linguistics, 16(4):503–515.

Carlton T James. 1975. The role of semantic information in lexical decisions. Journal of Experimental Psychology: Human Perception and Performance, 1(2):130.

Philipp Kanske and Sonja A Kotz. 2010. Leipzig affective norms for German: A reliability study. Behavior Research Methods, 42:987–991.

Maximilian Köper and Sabine Schulte Im Walde. 2016. Automatically generated affective norms of abstractness, arousal, imageability and valence for 350,000 German lemmas. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2595–2598.

Nikolaus Kriegeskorte and Rogier A Kievit. 2013. Representational geometry: integrating cognition, computation, and the brain. Trends in Cognitive Sciences, 17(8):401–412.

Nikolaus Kriegeskorte, Marieke Mur, and Peter A Bandettini. 2008. Representational similarity analysis - connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2:249.

Alessandro Lenci, Magnus Sahlgren, Patrick Jeuniaux, Amaru Cuba Gyllensten, and Martina Miliani. 2022. A comparative evaluation and analysis of three generations of distributional semantic models. Language Resources and Evaluation, 56(4):1269–1313.

Guido Löhr. 2024. Does the mind care about whether a word is abstract or concrete? Why concreteness is probably not a natural kind. Mind & Language, 39(5):627–646.

Gonzalo Martínez, Juan Diego Molero, Sandra González, Javier Conde, Marc Brysbaert, and Pedro Reviriego. 2025.
Using large language models to estimate features of multi-word expressions: Concreteness, valence, arousal. Behavior Research Methods, 57(1):1–11.

Maria Montefinese. 2019. Semantic representation of abstract and concrete words: A minireview of neural evidence. Journal of Neurophysiology, 121(5):1585–1587.

Andreas Müller. 2015. Analyse von Wort-Vektoren deutscher Textkorpora.

Subba Reddy Oota, Emin Çelik, Fatma Deniz, and Mariya Toneva. 2024. Speech language models lack important brain-relevant semantics. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8503–8528, Bangkok, Thailand. Association for Computational Linguistics.

Allan Paivio, John C Yuille, and Stephen A Madigan. 1968. Concreteness, imagery, and meaningfulness values for 925 nouns. Journal of Experimental Psychology, 76(1p2):1.

F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830.

Francisco Pereira, Bin Lou, Brianna Pritchett, Samuel Ritter, Samuel J Gershman, Nancy Kanwisher, Matthew Botvinick, and Evelina Fedorenko. 2018. Toward a universal decoder of linguistic meaning from brain activation. Nature Communications, 9(1):963.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.

Jamie Reilly, Cory Shain, Valentina Borghesani, Philipp Kuhnke, Gabriella Vigliocco, Jonathan E Peelle, Bradford Z Mahon, Laurel J Buxbaum,
Asifa Majid, Marc Brysbaert, et al. 2024. What we mean when we say semantic: Toward a multidisciplinary semantic glossary. Psychonomic Bulletin & Review, pages 1–38.

Valery Solovyev. 2020. Concreteness/abstractness concept: State of the art. In International Conference on Cognitive Sciences, pages 275–283. Springer.

Ilia Sucholutsky, Lukas Muttenthaler, Adrian Weller, Andi Peng, Andreea Bobu, Been Kim, Bradley C Love, Erin Grant, Iris Groen, Jascha Achterberg, et al. 2023. Getting aligned on representational alignment. arXiv preprint arXiv:2310.13018.

Bill Thompson and Gary Lupyan. 2018. Automatic estimation of lexical concreteness in 77 languages. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 40.

Jacopo Turini and Melissa Le-Hoa Võ. 2022. Hierarchical organization of objects in scenes is reflected in mental representations of objects. Scientific Reports, 12(1):20068.

Raphael Vallat. 2018. Pingouin: statistics in Python. Journal of Open Source Software, 3(31):1026.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.

Christian Wartena. 2024. Estimating word concreteness from contextualized embeddings. In Proceedings of the 20th Conference on Natural Language Processing (KONVENS 2024), pages 81–88.

Evan J Williams. 1959. The comparison of regression variables. Journal of the Royal Statistical Society: Series B (Methodological), 21(2):396–399.

Tal Yarkoni, David Balota, and Melvin Yap. 2008. Moving beyond Coltheart's N: A new measure of orthographic similarity. Psychonomic Bulletin & Review, 15(5):971–979.

A Appendix

To further characterize the nature of the representational alignment, we also used imageability, valence, and arousal in combination with the ablation approach. The ratings were provided by Köper and Im Walde (2016).
Note that imageability is known to be highly correlated with concreteness; for our set of 40 words, we report a Pearson correlation of .93 between imageability and concreteness. Interestingly, we observe a consistent drop only for concreteness but not for the other features. We found, however, a significant change for imageability for all language models except word2vec. For both fastText and GPT2, valence also differs significantly from the base correlation. Taken together, these results indicate that concreteness still plays a more critical role in the human-model alignment.

Figure 4: Control Analysis: Representational Similarity analysis after removing further semantic features. The ablation approach reported in the main paper was repeated with further semantic dimensions, i.e., word imageability, word arousal, and word valence. Compared to the base correlations, removing concreteness resulted in the biggest drop for all language models.
arXiv:2505.15683v1 [cs.CL] 21 May 2025

Graphical Abstract

A Federated Splitting Framework for LLMs: Security, Efficiency, and Adaptability

Zishuai Zhang, Hainan Zhang, Jiaying Zheng, Ziwei Wang, Yongxin Tong, Jin Dong, Zhiming Zheng

Overall framework of FL-LLaMA: the model is split with adaptive partition points, Gaussian noise is added to forward-passing vectors to preserve privacy under the LoRA fine-tuning strategy, client-batching and server-hierarchical parallelism improve training efficiency, and a distributed KV cache mechanism accelerates the inference process. (1) denotes that h_A is transmitted to the server-side model blocks B. (2) denotes that h_B is transmitted to the client-side model blocks C. (3) denotes that the gradient with respect to h_B is backpropagated to the server. (4) denotes that the gradient with respect to h_A is backpropagated to the client-side model.

Highlights

A Federated Splitting Framework for LLMs: Security, Efficiency, and Adaptability

Zishuai Zhang, Hainan Zhang, Jiaying Zheng, Ziwei Wang, Yongxin Tong, Jin Dong, Zhiming Zheng

• We introduce the federated splitting LLM, FL-LLaMA. FL-LLaMA holds some input and output blocks on the local client and injects Gaussian noise into forward-pass hidden states, enabling secure end-to-end propagation.

• We employ client-batch and server-hierarchical strategies to achieve parallel training, along with attention-mask compression and KV cache mechanisms to accelerate training and inference.

• FL-LLaMA allows users to dynamically adjust the partition points for input/output blocks based on specific task requirements and hardware limitations.
A Federated Splitting Framework for LLMs: Security, Efficiency, and Adaptability

Zishuai Zhang a,∗, Hainan Zhang a,∗, Jiaying Zheng a, Ziwei Wang a, Yongxin Tong b,∗∗, Jin Dong c,∗∗, Zhiming Zheng a

a School of Artificial Intelligence, Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, Beihang University, Beijing, 100190, China
b School of Computer Science and Engineering, the State Key Laboratory of Software Development Environment, Beihang University, Beijing, 100190, China
c Beijing Academy of Blockchain and Edge Computing, Beijing, 100190, China

Abstract

Private data is typically larger and of higher quality than public data, offering great potential to improve LLMs. However, its scattered distribution across data silos and the high computational demands of LLMs limit their deployment in federated environments. To address this, transformer-based split learning models have emerged, offloading most model parameters to the server while retaining only the embedding and output layers on clients to ensure privacy. However, this approach still faces significant challenges in security, efficiency, and adaptability: 1) embedding gradients are vulnerable to attacks, leading to reverse engineering of private data; 2) the autoregressive nature of LLMs means that federated split learning can only train and infer sequentially, causing high communication overhead; 3) fixed partition points lack adaptability to downstream tasks. In this paper, we introduce FL-LLaMA, a secure, efficient, and adaptive federated split framework based on LLaMA2. First, we place some input and output blocks on the local client and inject Gaussian noise into forward-pass hidden states, enabling secure end-to-end propagation. Second, we employ client-batch and server-hierarchical strategies to achieve parallel training, along with attention-mask compression and KV cache mechanisms to accelerate inference, reducing communication costs effectively.
Third, we allow users to dynamically adjust the partition points for input/output blocks based on specific task requirements and hardware limitations. Experiments on NLU, summarization and conversational QA tasks show that FL-LLaMA maintains performance comparable to | https://arxiv.org/abs/2505.15683v1 |
centralized LLaMA2, and achieves up to 2× training speedups and 8× inference speedups. Further analysis of privacy attacks and different partition points also demonstrates the effectiveness of FL-LLaMA in security and adaptability.

Keywords: Federated Learning, Large Language Model, Privacy Protection, Communication Efficiency, Split Learning

PACS: 07.05.Mh, 84.35.+i
2010 MSC: 68Q85, 68W15, 68T05, 68T50, 68W10

⋆ This work was funded by the National Natural Science Foundation of China (NSFC) under Grants No. 62406013, the Beijing Advanced Innovation Center for Future Blockchain and Privacy Computing, the State Key Laboratory of Complex & Critical Software Environment and the Fundamental Research Funds for the Central Universities.
∗ Equal contribution
∗∗ Corresponding author
Email addresses: zhangzishuai@buaa.edu.cn (Zishuai Zhang), zhanghainan@buaa.edu.cn (Hainan Zhang), by2342114@buaa.edu.cn (Jiaying Zheng), wangziwei26@buaa.edu.cn (Ziwei Wang), yxtong@buaa.edu.cn (Yongxin Tong), dongjin@baec.org.cn (Jin Dong), zhengzhiming0130@163.com (Zhiming Zheng)

Preprint submitted to Artificial Intelligence, May 22, 2025

1. Introduction

The success of Large Language Models (LLMs) can be attributed to their ability to leverage large amounts of public data and intricate parameter networks [1]. By training on extensive corpora, LLMs can capture statistical patterns in language, semantic structures, and subtle contextual relationships. However, private data holds a significant advantage over public datasets in terms of both quality and utility. This is because private datasets often contain more granular and specific insights about individuals or organizations, typically collected through more controlled and precise methodologies [2]. Additionally, private data tends to be more timely, targeted, and unique, thus providing models with semantic contexts and environmental settings that more closely mirror real-world scenarios [3].
Therefore, effectively incorporating private data will become one of the critical factors in further advancing the performance of LLMs.

Figure 1: Model architecture of FedBERT and FL-LLaMA.

The distributed nature of private data across various silos and the significant computational demands of LLMs present substantial challenges for their deployment in federated environments. Private data is often isolated in local silos, such as mobile devices [4], enterprise servers [5] or medical institutions [6, 7], as centralized storage and processing of such data may raise privacy concerns and regulatory risks [8]. Federated Learning (FL) offers a promising solution by enabling local training on user devices and only sending model parameters for aggregation, but traditional FL frameworks, such as FedAvg [9] and FedProx [10], are infeasible for LLM training scenarios. LLMs typically have billions of parameters, and their training process requires extensive computational resources, placing substantial demands on device memory and network bandwidth. This makes training the full model on ordinary client devices practically infeasible. Additionally, the communication overhead and heterogeneity inherent in LLM training become even more pronounced within federated environments, making it challenging for models to efficiently converge or maintain generalization capabilities. As a result, how to effectively deploy LLMs within FL frameworks while ensuring privacy and scalability remains an open and cutting-edge research topic. As an alternative, a transformer-based split learning method named FedBERT [11] has been proposed to adapt LLMs, offloading most model parameters to the server while retaining only the embedding and output layers on the client to ensure privacy. As shown in Fig. 1(a), FedBERT divides
Transformers into three parts, i.e., embedding layers and output layers on the client, while the main Transformer layers sit on the server. The forward propagation (FP) path is client-to-server-to-client. After the client calculates the loss, gradients backpropagate (BP) from client to server to client. The core idea of FedBERT is to divide the computation into two parts: the client device is responsible for initial feature extraction from raw data and calculating the final loss, while the server handles deeper inference and learning. Although FedBERT provides a potentially viable solution for resource-constrained client scenarios, it still faces several limitations: 1) Privacy Leakage: during forward and backward propagation, activation values and gradients from the embedding layer are transmitted, which can be exploited to reconstruct original input data through beam search [12] or reverse engineering [13]. 2) Communication Cost: LLMs are autoregressive, requiring sequential token generation based on prior tokens. In federated split learning, this necessitates a full forward and backward pass with client-server communication for each token, greatly increasing communication rounds and bandwidth usage. 3) Lack of Adaptability: fixed partition points lack flexibility for downstream tasks and hardware constraints, as different devices vary in their capacity to handle workloads and parameter sizes. In this paper, we introduce FL-LLaMA, a secure, efficient, and adaptive federated split framework based on LLaMA2, as shown in Fig. 1(b). To Enable Secure Propagation, we partition the transformers of LLaMA2 into three parts: input and output blocks on clients, and the remaining large parameters on the server. During training, clients perform forward propagation to generate hidden states, add Gaussian noise to them, and send these noised states to the server for further processing.
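The partition-and-noise step can be sketched schematically. This is not the released implementation (see the authors' repository): the layer count, split points, noise scale, and tensor shapes below are all hypothetical placeholders, and the real system applies this to LLaMA2 transformer blocks with a tuned noise level.

```python
import numpy as np

NUM_LAYERS = 32          # hypothetical LLaMA2-7B-style depth

def partition(num_input_blocks, num_output_blocks):
    """Adaptive split: the client holds blocks A (bottom) and C (top),
    the server holds the large middle block B."""
    assert 0 < num_input_blocks + num_output_blocks < NUM_LAYERS
    block_a = list(range(num_input_blocks))
    block_b = list(range(num_input_blocks, NUM_LAYERS - num_output_blocks))
    block_c = list(range(NUM_LAYERS - num_output_blocks, NUM_LAYERS))
    return block_a, block_b, block_c

def noise_hidden_states(h_a, sigma, rng):
    """Client-side privacy step: perturb the forward-pass hidden states
    h_A with Gaussian noise before transmitting them to server block B."""
    return h_a + rng.normal(0.0, sigma, size=h_a.shape)

# Example: 2 input blocks and 2 output blocks stay on the client.
block_a, block_b, block_c = partition(2, 2)
h_a = np.zeros((1, 16, 4096))            # (batch, seq_len, hidden_dim)
noised = noise_hidden_states(h_a, sigma=0.1, rng=np.random.default_rng(0))
```

The noise scale sigma governs the privacy-utility trade-off, which the paper examines through gradient analysis and privacy-attack experiments.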
Through both gradient analysis and experimental verification, we show that this noise ensures privacy and security during subsequent training and inference stages. To Reduce Communication Cost, we employ client-batch and server-hierarchical strategies to achieve parallel training: the former batches data from different clients, while the latter leverages concurrent sub-servers to enhance training efficiency. During inference, we introduce attention-mask compression and collaborative KV cache mechanisms, minimizing overhead while preserving the integrity of autoregressive inference. By synchronously maintaining cache states on both client and server, the originally cross-device transmission is transformed into dynamically generated local data, reducing communication from megabytes (MB) to bytes (B). To Achieve Adaptive Partition, we design a dynamic partition strategy allowing users to flexibly adjust the number of input and output blocks according to specific downstream tasks and available computational resources. This adaptability enables an optimal balance between performance and efficiency. Experiments on NLU, summarization, and conversational QA tasks show that FL-LLaMA maintains performance comparable to centralized LLaMA2, achieving up to 2× training and 8× inference speedups. Through inference acceleration, the average token generation time of FL-LLaMA has been reduced by 87.6%, and the speed gain further increases with the length of the context.1 The innovations in this paper are as follows:

• We design a federated split framework tailored for resource-constrained client scenarios, featuring adaptive partitioning and client-side Gaussian noise injection to enable privacy-preserving training and inference of LLMs.

• We propose client-batch and server-hierarchical optimizations to enhance split learning efficiency, and use a collaborative KV cache with attention
mask compression to reduce autoregressive inference communication.

• We enable dynamic partition points based on downstream tasks and device capabilities, optimizing the trade-off between computational efficiency and model performance.

1 Our code is available at: https://github.com/TAP-LLM/SplitFedLLM.

• Experimental results on NLU, summarization and conversational QA tasks show that our FL-LLaMA achieves performance comparable to centralized LLaMA2, with up to 2× training speedups and 8× inference speedups.

This paper is an extension of the work presented in [14]. There are five major differences from the original version: (i) the base model and fine-tuning method have been updated, from p-tuning v2 on ChatGLM in the original version to LoRA on LLaMA2, demonstrating the generalizability of our federated split framework; (ii) Gaussian noise is introduced during the forward pass of split learning to further enhance the security of the federated split LLMs; (iii) the use of attention-mask compression and a collaborative KV cache mechanism significantly improves inference efficiency in the federated split setting; (iv) dynamic partitioning is supported, allowing users to customize split points via configuration files, thus improving adaptability; (v) the experimental evaluation has been extended by assessing FL-LLaMA on new summarization and conversational QA tasks, showcasing its versatility. The rest of this paper is organized as follows. Section 2 reviews related work, and Section 3 presents the details of the federated split framework. Section 4 reports the experimental evaluation. Finally, Section 5 concludes the paper.

2. Related Work

2.1. Federated Learning for LLMs

Federated Learning (FL) has emerged as a promising paradigm for training language models in a decentralized manner while preserving user privacy and ensuring data security.
FedAvg [15] is one of the earliest and most widely adopted optimization algorithms and has been successfully applied to multiple language modeling tasks [16, 17, 9]. In this approach, each client locally trains a model on its private data and sends the updates to a central server, which performs a weighted aggregation to produce a new global model. To reduce the number of local training rounds and accelerate convergence, Stremmel et al. [9] propose initializing the global model with a pretrained language model before applying FedAvg, which has proven effective in speeding up training. Ji et al. [18] introduce FedAtt, which incorporates a hierarchical soft attention mechanism during aggregation to adaptively weight each client's model parameters based on their contribution, improving both generalization and local adaptation. For LLMs, traditional FL frameworks face growing challenges due to increased communication and computational costs. To address the issue of massive model sizes, several federated LLM frameworks [19, 20, 3] have been proposed that leverage instruction tuning, Adapters, and LoRA for local fine-tuning, which require updating only a small subset of parameters and significantly reduce communication costs while maintaining performance. But these methods still require the transmission of parameter quantities far exceeding the system's carrying capacity. To further minimize communication overhead, gradient compression [21, 22] and quantization [23] mechanisms have been explored, which effectively reduce the volume of data exchanged, making them well-suited for federated fine-tuning of LLMs. Liu et al. [24] propose structural pruning approaches that update
only selected parameter layers of LLMs based on Fisher Information. By narrowing the update scope, these methods effectively reduce the size of transmitted models without sacrificing overall performance. Besides, knowledge distillation methods between large and small language models [25, 26, 27] have been proposed to avoid the high communication cost of transferring LLM parameters directly. They use smaller proxy models as intermediaries, facilitating knowledge transfer between clients and the server with substantially lower communication. Additionally, client heterogeneity remains a central challenge in FL, due to variations in computational capacity, data distributions, and task types across clients [28]. Bai et al. [29] and Wang et al. [30] introduce LoRA-based stacking strategies to balance the differences in client capabilities. These strategies help ensure that models from clients with varying ranks can still be aggregated effectively, improving training stability and fairness. While recent work on federated LLMs explores efficient fine-tuning and parameter reduction methods, resource-constrained clients still struggle with independent fine-tuning, especially with added compression overhead. Therefore, split learning has emerged as a viable alternative for such scenarios. Figure 2: Overall framework of FL-LLaMA. The model is split with adaptive partition points, Gaussian noise is added to forward-pass vectors to preserve privacy under the LoRA fine-tuning strategy, client-batch and server-hierarchical parallelism improve training efficiency, and a distributed KV cache mechanism accelerates inference. 1○ denotes that hA is transmitted to the server-side model Blocks B. 2○ denotes that hB is transmitted to the client-side model Blocks C. 3○ denotes that the gradient with respect to hB is backpropagated to the server. 4○ denotes that the gradient with respect to hA is backpropagated to the client-side model. 2.2.
Split Learning for LLMs Split learning [31] is gaining traction as a decentralized and collaborative training paradigm tailored for environments with limited computational resources [32, 33, 34, 35], and is particularly suited for scenarios involving mobile platforms or lightweight clients that lack access to GPUs. The core idea involves partitioning a deep neural network into separate sub-networks, with each sub-network assigned to a different device for distributed processing. To address the training of sequential data in LLMs, FedBERT [11] introduces an innovative federated learning framework designed to pre-train language models in a split architecture, addressing the challenge of limited computational capacity on client devices. In this approach, the BERT model is divided into three core components: the Embedding layer, the Transformer layer, and the Output layer. This separation allows the heavy Transformer layer to be trained on a centralized, high-performance server, while the lighter Embedding and Output layers are handled locally by client devices. Although this design enables more efficient distribution of training workloads, it comes with drawbacks, such as increased communication overhead, fixed partition points and susceptibility to privacy threats like embedding gradient attacks. Recently, Su et al. [36] extend the scope of split learning by exploring scenarios with heterogeneous resource configurations, particularly considering a case where the server lacks sufficient computational capacity. In their approach, the LLM is automatically divided into k partitions, each assigned to one of k clients, while the server merely performs aggregation. Although this strategy alleviates server-side computation, it considerably increases the overall
training and inference time for LLMs. Furthermore, the repeated transmission of hidden states increases the risk of single points of failure and introduces potential security vulnerabilities. Our work diverges from these studies by targeting a scenario with a powerful server and resource-constrained clients holding private data. We prioritize training/inference efficiency, adaptive partition points, and data privacy for distributed LLMs through optimized algorithmic designs and communication strategies. 2.3. Attacks and Defenses In federated learning, although multiple clients can jointly fine-tune models without revealing raw data, there remains a risk of embedding-based reconstruction attacks, LoRA-based gradient inversion attacks and split-learning-based data reconstruction attacks. Attacks from embedding gradients have proven highly effective in reconstructing users' private data. Gupta et al. [12] demonstrated that by identifying non-zero elements in the embedding gradients, an adversary can infer which tokens were used by the client, using beam search to reconstruct private data. To mitigate such attacks, they proposed freezing the embedding layer during training. Zhu et al. [37] briefly discussed defensive strategies such as injecting differential privacy noise [38, 39] or applying gradient clipping by zeroing out small gradients. Huang et al. [40] introduced a MixUp-based data augmentation method on the [CLS] token in BERT, while Yaldiz et al. [41] suggested applying cosine similarity checks on client-uploaded weights to detect and filter out malicious clients. However, these defenses often degrade the model's accuracy [42, 43]. In addition to embedding-based reconstruction attacks, federated LLMs are also vulnerable to gradient inversion attacks via LoRA, due to frequent parameter sharing across clients [44, 45]. To address this, Liu et al.
[46] propose DP-LoRA, which adds Gaussian noise to LoRA updates to ensure differential privacy [47, 39]. However, the combination of heterogeneous data, DP noise, and LoRA's sensitivity to hyperparameters can lead to instability and performance loss. To overcome this, Sun et al. [48] introduced FFA-LoRA, which initializes the non-zero entries of the LoRA A matrix from a Gaussian distribution and keeps matrix A frozen while only fine-tuning the zero-initialized LoRA B matrix, thus preventing the injected noise under differential privacy from being amplified. However, exposure of intermediate activations and gradients in distributed training still enables potential data reconstruction, prompting increased focus on secure split learning methods. In split learning, Ege et al. [49] propose inferring labels by minimizing gradient differences between a pseudo-model and the actual model, achieving strong performance in classification tasks. However, this approach is impractical for LLMs due to the high computational cost of handling vocabularies exceeding 100,000 tokens, making real-time federated training infeasible. In image recognition, Dario et al. [50] show that a malicious server could reconstruct client data by exploiting hidden states via a Feature Space Hijacking Attack (FSHA). However, FSHA relies on access to a shadow dataset from the same domain and task, an unrealistic assumption in privacy-preserving settings, where servers lack access to such data. Building on prior attacks, we explore a more severe threat model where a malicious client colludes with an honest-but-curious server by sharing its data, enabling the server to train an attack model
that reconstructs plaintext from other clients using hidden states. Importantly, the server does not alter model outputs or gradients. 3. Framework In this section, we present a detailed description of the FL-LLaMA framework, as shown in Fig. 2, which comprises five components: split learning, privacy preservation, parallel training, inference acceleration and adaptive partitioning. First, we adopt split learning to divide LLaMA into three parts: the local client stores the first p blocks and the last q blocks, while the remaining k blocks, comprising the majority of the model parameters, are hosted on the server. Next, we design a Gaussian noise injection mechanism applied to the forward-pass hidden states to ensure data privacy during transmission between the client and server. Third, we enhance the training efficiency of split learning through client-batch and server-hierarchical parallelism. Fourth, we accelerate inference by leveraging attention mask compression and collaborative KV cache mechanisms. Finally, the framework allows clients to adaptively adjust the split configuration based on their privacy preferences and available computational resources. 3.1. Split Learning Suppose the LLM comprises N transformer blocks. The proposed FL-LLaMA framework partitions the LLM into three segments: Part I contains the embedding layer and the first p transformer blocks, denoted as Blocks A; Part II includes the middle k transformer blocks, denoted as Blocks B; Part III consists of the last q transformer blocks along with the output layer, denoted as Blocks C, where p + k + q = N. For generative tasks, the model predicts the next token autoregressively, with data labels closely linked to the input text sequences. To safeguard client data privacy, the FL-LLaMA framework places Part I Blocks A and Part III Blocks C on the client side, while Part II Blocks B resides on the server side, thereby preventing direct exposure of client data.
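The three-way partition can be sketched in a few lines (an illustrative plain-Python sketch under the paper's p + k + q = N constraint; the helper name `partition_blocks` is hypothetical):

```python
# Illustrative sketch of FL-LLaMA's three-way split: the first p blocks
# (Blocks A) and the last q blocks (Blocks C) stay on the client, while the
# middle k = N - p - q blocks (Blocks B) are hosted on the server.

def partition_blocks(n_blocks, p, q):
    """Return the transformer-block indices assigned to Blocks A, B, and C."""
    assert p + q < n_blocks, "the server must hold at least one block"
    k = n_blocks - p - q                      # so p + k + q == N
    blocks_a = list(range(0, p))              # client side
    blocks_b = list(range(p, p + k))          # server side
    blocks_c = list(range(p + k, n_blocks))   # client side
    return blocks_a, blocks_b, blocks_c

# LLaMA2-7B has 32 transformer blocks; with p = q = 1 the server hosts 30.
a, b, c = partition_blocks(32, p=1, q=1)
print(len(a), len(b), len(c))  # -> 1 30 1
```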
Such an architecture optimizes GPU memory allocation, as the server handles most of the model's GPU memory burden, significantly reducing memory pressure on the client. The architecture diagram of the proposed model is shown on the left of Fig. 2. Taking a single-client distributed training scenario as an example, at the beginning of training, the client uses local data to perform forward propagation through Blocks A, resulting in hidden states h_A: h_A = f_{1∼p} ∘ f_embed(x), (1) where f_embed(·) denotes the embedding layer of the LLM, and f_{1∼p}(·) represents the sequential forward propagation through transformer blocks 1 to p at the client side. After obtaining h_A, the client transmits it together with the position index and the attention mask matrix A_mask ∈ {0, −∞}^{b×seqlen×seqlen} to the server, where b is the batch size and seqlen is the maximum context length of the LLM. Subsequently, Blocks B on the server side takes the received hidden states h_A and attention mask A_mask as inputs, performs the computationally intensive work, and returns the resulting hidden states h_B back to the client: h_B = f_{p+1∼p+k}(h_A, A_mask), (2) where f_{p+1∼p+k}(·) denotes sequential forward propagation through transformer blocks p+1 to p+k on the server side. Finally, Blocks C on the client side performs computations to obtain the logits z, computes the loss with local data, and initiates gradient backpropagation. These steps are defined by the following equations: z = f_{p+k+1∼N}(h_B, A_mask), (3) L = F_loss(f_out(z), y), (4) where
f_{p+k+1∼N}(·) denotes sequential forward propagation through transformer blocks p+k+1 to N on the client side, f_out(·) is the output layer of the model, y is the target sequence, and F_loss(·) denotes the cross-entropy loss function. After computing the loss, the client performs gradient backpropagation locally to update Blocks C, then transmits the gradient of the hidden states h_B back to the server. The server executes gradient backpropagation on Blocks B, updates its parameters, and subsequently transmits the gradients of hidden states h_A back to the client. Lastly, Blocks A performs gradient backpropagation, updates the model and embedding parameters, and completes a sequential training round. 3.2. Privacy Preservation In distributed training of LLMs, the split learning paradigm partitions the entire model into multiple sub-models deployed across different physical nodes. As a result, the hidden states and gradients must be transferred across nodes via the network. Unlike traditional federated learning, which only shares gradients, this approach exposes mid-layer activations and their gradients, greatly increasing the risk of adversaries reverse-engineering user data and challenging the data protection framework of federated split LLMs. 3.2.1. Threat Model Definition We design an extreme attack scenario, inspired by Dario et al. [50], to evaluate the robustness of FL-LLaMA. Specifically, we construct a model inversion attack in a multi-client federated setting, as illustrated in Fig. 3, where a malicious client colludes with an honest-but-curious server. The malicious client shares its private data in plaintext with the server, allowing the server to train an attack model locally. The goal of the attack model is to reconstruct the original input text from the hidden states transmitted by the client. Figure 3: Overall attack procedure of the threat model in multi-client training.
1○ Clients upload hidden states; 2○ Server processes them via Blocks B; 3○ Attack decoder learns from the malicious client's data; 4○ Normal outputs are sent back to all clients. During this process, the server does not tamper with the hidden states sent to other honest clients, thereby performing a stealthy inversion attack against their private data without their awareness. Concretely, we consider an adversary that has full knowledge of the client-side model architecture and the number of layers deployed on the client, but does not have access to the actual model parameters. Thus, the honest-but-curious server locally initializes a decoder with the same architecture and depth as the client's model, matching the LLaMA2 configuration, and uses it as the attack model. Without loss of generality, we consider a two-client FL-LLaMA setting, where the server receives hidden states from both client 0 and client 1. During each training step, the server first performs a normal forward pass through Blocks B to obtain the intermediate hidden states h_B. It then feeds the hidden states from the malicious client 0 into the attack model and uses the corresponding private data D_priv as supervision. Afterward, h_B is split and sent back to the respective clients, allowing training to proceed as usual without disrupting the federated process. 3.2.2. Defense with Gaussian Noise Injection We explore two defense strategies to protect the transmission in FL-LLaMA: adding noise to gradients and adding noise to forward-pass hidden states2. The first strategy introduces random
Gaussian noise into the initial gradient during backpropagation (i.e., the gradient ∇h_B), while the second applies Gaussian noise to the client's first forward-pass output (i.e., the hidden state h_A). Empirical results reveal that federated split LLMs are highly sensitive to gradient perturbations, so injecting noise into gradients often leads to instability in federated training, with the loss frequently diverging to NaN after several training steps. 2It is noteworthy that we chose not to use the standard DPSGD [51], which adds noise after computing all gradients, because FL-LLaMA requires noise protection during both forward and backward passes, unlike DPSGD's focus on post-backpropagation gradients. Figure 4: Injecting noise into hidden state h1. Noise added to h1 eventually protects the gradients of parameters. In contrast, perturbing the hidden state h_A during the forward pass yields more stable training. Therefore, FL-LLaMA adopts the latter to enhance robustness while preserving privacy. Next, we derive how the noise introduced in the forward pass propagates to the gradients during backpropagation. Fig. 4 illustrates the schematic of noise injection during the forward propagation of LLMs. Let W_n denote the weight matrix of the down-projection fully-connected layer within the final MLP layer, and W_{n+1} the weight of the output linear layer. In the noise-free scenario, the hidden states h1 undergo forward propagation:

h2 = A(h1) + h1, (5)
h3 = RMSNorm(h2), (6)
h4 = M(h3) + h2 (7)
   = W_n [SiLU(gate_proj(h3)) · up_proj(h3)] + h2, (8)
h5 = W_{n+1} · RMSNorm(h4), (9)
ŷ = softmax(h5), (10)
L = CrossEntropy(y, ŷ), (11)

where RMSNorm(·) is the layer normalization function, A is the attention layer, and M is the MLP layer. During backpropagation, according to [52], the gradient of W_n is:

∂L/∂W_n = (∂L/∂h5) · (∂h5/∂h4) · (∂h4/∂W_n), (12)
∂L/∂h5 = (ŷ − y), (13)
∂h5/∂h4 = W_{n+1} · J_RMSNorm(h4), (14)
∂h4/∂W_n = [SiLU(gate_proj(h3)) · up_proj(h3)]^T. (15)

(a) Client-batch parallel training. (b) Server-hierarchical parallel training.
Figure 5: Multi-client parallel training of FL-LLaMA. In (a), the server receives forward-pass states from multiple clients simultaneously, concatenates them into a batch, and performs joint forward propagation. Then, the server splits hidden state h_B and sends the parts back to the different clients. In (b), each client i is paired with a corresponding sub-server i. After several training steps, the central server aggregates all sub-servers' models using FedAvg and distributes the updated global server model back to each sub-server i. According to Eqs. 5-11, the gradient of W_n is closely related to the value of h1. Thus, when Gaussian noise N(0, δ) is injected into the hidden states h1 as h̃1, perturbations are consequently introduced into the gradients of the parameters, thereby influencing subsequent gradient computations. In other words, injecting noise into the forward propagation process protects both the hidden states and their gradients transmitted during the training of split LLMs. 3.3. Parallel Training In multi-client settings, the FL-LLaMA framework accommodates three training paradigms: sequential training, client-batch parallelism and server-hierarchical parallelism. Each is optimized for different system configurations and resource availability. In sequential training, the server interacts with one client at a time in a round-robin manner, which is more suitable for scenarios with limited server capacity. In client-batch parallelism, the server waits for data from each client and then concatenates them into a single batch, which is appropriate for a single machine with multiple
GPUs. In server-hierarchical parallelism, each client is assigned a separate sub-server for interaction, allowing multiple clients to train in true parallel, which is suitable for multi-machine, multi-GPU server environments. 3.3.1. Sequential Training The client begins by passing hidden states through its input blocks Blocks A and forwards the result to the server. The server processes the input through Blocks B and returns the output states. The client then computes the loss with Blocks C, sends the gradients to the server for model updates, and receives the updated gradients to adjust its input blocks. This process continues until all clients have completed their respective training phases for the current round. Although this method is inherently slower due to its serialized nature, it significantly reduces the computational burden on the server, making it a suitable choice for scenarios where server-side resources are limited.

| Dataset | Task | Cloze Question | Answers |
| ReCoRD | Question answering | [passage p] [cloze question q] | Answer candidates |
| COPA | Causal reasoning | "[choice c1]" or "[choice c2]"? [premise p], so [M]. | c1/c2 |
| WSC | Coreference resolution | [sentence s] The pronoun '*p*' refers to [M]. | Noun n |
| RTE | Textual entailment | "[hypothesis h]"? [M] "[premise p]" | "yes"/"no" |
| BoolQ | Question answering | [passage p]. Question: q? Answer: [M]. | "yes"/"no" |
| WiC | Word sense disambiguation | "[sentence s1]" / "[sentence s2]" Similar sense of [word w]? [M]. | "yes"/"no" |
| CB | Textual entailment | "[hypothesis h]"? [M], "[premise p]" | "yes"/"no"/"maybe" |
| MultiRC | Question answering | [passage p]. Question: q? Is it [answer a]? [M]. | "yes"/"no" |
Table 1: Cloze questions and answers for the 8 SuperGLUE tasks.

3.3.2. Client-Batch Parallelism The client-batch parallel approach enhances training throughput by expanding the effective batch size at the server, as shown in Fig. 5a. During each training round, the server collects intermediate hidden states from all participating clients.
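This collection step amounts to concatenation along the batch dimension, as a minimal sketch shows (plain nested lists standing in for tensors; the helper name is hypothetical and this is not the authors' code):

```python
# Illustrative sketch of client-batch parallelism: per-client hidden states of
# shape (b, seq_len, hidden) are stacked into one (b*M, seq_len, hidden) batch,
# analogous to torch.cat(states, dim=0).

def concat_client_batches(client_states):
    """Concatenate per-client batches along the first (batch) dimension."""
    merged = []
    for states in client_states:   # one entry per client
        merged.extend(states)
    return merged

b, seq_len, hidden = 2, 4, 8
one_client = [[[0.0] * hidden for _ in range(seq_len)] for _ in range(b)]
batch = concat_client_batches([one_client] * 3)   # M = 3 clients
print(len(batch), len(batch[0]), len(batch[0][0]))  # -> 6 4 8
```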
If each of the M clients processes a local batch of size b, the hidden states sent to the server will each have the shape (b, seqlen, hidden_size). The server concatenates these hidden states along the batch dimension, forming a combined tensor of shape (b×M, seqlen, hidden_size). This aggregated batch is then used to perform a joint forward and backward pass on the server model, allowing it to compute gradients more efficiently and update shared parameters. 3.3.3. Server-hierarchical Parallelism In the server-hierarchical parallel setup, the central server spawns multiple sub-server model instances, each dedicated to a specific client, as illustrated in Fig. 5b. These instances run in parallel, allowing each client to perform local training simultaneously rather than waiting for others to complete. Typically, after a fixed number of local updates, the central server synchronizes the training progress by aggregating model parameters across all sub-servers. This is usually done via weighted averaging of both server-side and client-side model parameters. This strategy enables significant improvements in training efficiency and scalability, especially in environments where the server has access to multiple GPUs or multi-threaded execution capabilities. 3.4. Inference Acceleration The inference time of LLMs is critically important for both enterprises and end-users. Low-latency inference minimizes user wait times, allowing for efficient and fast handling of complex tasks. Distributed architectures partition tensor operations, traditionally executed within GPUs and directly interacting with high-bandwidth
memory (HBM), into separate computational units across multiple physical devices. Consequently, intermediate data must be transferred over networks, posing substantial latency challenges for the inference of LLMs. To enhance the efficiency and usability of distributed LLMs, our proposed FL-LLaMA framework incorporates attention mask compression and collaborative KV cache mechanisms to minimize redundant data transmission. 3.4.1. Attention Mask Compression FL-LLaMA transmits only the minimal information necessary for effective model training and inference. One significant improvement arises from reducing redundancy in attention mask transmissions. In masked self-attention mechanisms, the attention mask serves two primary purposes: (1) preventing decoders from prematurely accessing subsequent tokens during prediction, and (2) ensuring that padding tokens within input data do not interfere with attention computations involving valid tokens. Notably, the generation of attention masks across transformer blocks remains consistent throughout each training iteration and depends solely on dataset padding. Consequently, rather than transmitting the entire attention mask tensor A_mask ∈ {0, −∞}^{b×seqlen×seqlen}, which incurs substantial bandwidth latency, the FL-LLaMA framework only sends essential metadata (seqlen, padlen) indicating the locations of padding tokens versus valid tokens from the client to the server, thus significantly decreasing communication overhead. 3.4.2. Collaborative KV Cache Mechanism Since the model autoregressively generates the next token based on the tokens already generated, the speed of generating new tokens slows down as the input sequence length becomes excessively long (e.g., greater than 1000), due to the increasing number of K and V calculations. The application of key-value (KV) cache techniques in distributed inference further enhances inference speed.
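The metadata scheme of Sec. 3.4.1 can be sketched as follows: rather than shipping the full b×seqlen×seqlen mask, the receiver rebuilds it from (seqlen, padlen). This is an illustrative reconstruction assuming left padding and the 0/−∞ convention above; the helper name is hypothetical, not the framework's API:

```python
# Illustrative sketch of server-side attention-mask reconstruction from the
# (seqlen, padlen) metadata. 0.0 marks attendable positions, -inf masked ones.

NEG_INF = float("-inf")

def rebuild_mask(seqlen, padlen):
    """Causal mask with the first `padlen` (left-padding) positions masked out."""
    mask = [[0.0] * seqlen for _ in range(seqlen)]
    for i in range(seqlen):
        for j in range(seqlen):
            if j > i or j < padlen:   # future token, or padding token
                mask[i][j] = NEG_INF
    return mask

m = rebuild_mask(4, 1)
# Row 2 may attend positions 1..2 but neither the padding (0) nor the future (3).
print(m[2])  # -> [-inf, 0.0, 0.0, -inf]
```

Two integers thus replace seqlen² mask entries per sample, which is the source of the bandwidth saving.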
Generative LLMs produce outputs autoregressively, meaning that each generated token becomes part of the input sequence used to generate subsequent tokens. As user queries grow longer, repeatedly transmitting the complete context, comprising the initial query and previously generated tokens, from client to server can drastically increase inference latency. To mitigate this issue, the collaborative KV cache mechanism locally stores previously computed key and value states from the attention layers. Since these states remain unchanged throughout token generation, newly generated tokens reuse cached states without requiring recomputation. Building upon this insight, FL-LLaMA caches the KV states locally at both the client and server sides, thereby restricting communication to the hidden state of the single newly-generated token at each inference step. This approach effectively minimizes data transmission, substantially improving inference efficiency. 3.5. Adaptive Partition Ege et al. [49] and Dario et al. [50] have pointed out that deeper model splits make it significantly harder for an attacker to recover information from the hidden states, as these data are produced through more complex computations. Furthermore, in real-world scenarios where the number of clients is large, the server may encounter computational resource bottlenecks, whereas clients are likely to have comparatively abundant computing resources. To flexibly accommodate specific task requirements and hardware constraints, we propose an adaptive splitting strategy that enables users to adjust the number of model input and output blocks in accordance with the demands of downstream tasks and available computational resources, thereby achieving an optimal balance between performance and efficiency. Based on this strategy, we explore the performance of downstream tasks when the client holds
the first blocks (with Blocks A=1,2,3) and the last blocks (with Blocks C=1,2,3) of the model in Sec. 4.2.6. The results indicate that the depth of the model split does not significantly impact performance on downstream tasks, while affording a more flexible allocation of computational loads between the server and the clients. Furthermore, this corroborates the efficacy of our adaptive splitting strategy and delineates a refined configuration paradigm for deploying LLMs in federated learning environments. 4. Experiments To demonstrate the performance of LLaMA within the federated split framework (FL-LLaMA), we conduct experiments using the same benchmarks as those used for the LLaMA2-7B model [53]. Developed by Meta AI, the LLaMA2-7B model serves as a general-purpose language model with strong performance in natural language processing tasks, particularly when fine-tuned for specific domains. We employ this architecture to investigate whether FL-LLaMA can maintain task performance comparable to conventional centralized training, while addressing high delays and data privacy concerns through decentralized learning. 4.1. Experimental Settings In this section, we introduce the empirical settings, including datasets, evaluation metrics, baselines and parameter settings for FL-LLaMA.

Table 2: Performance comparison on SuperGLUE, CoQA and XSum datasets between FL-LLaMA and baselines.
| Model | ReCoRD F1/Acc. | COPA Acc. | WSC Acc. | RTE Acc. | BoolQ Acc. | WiC Acc. | CB F1 | MultiRC F1a/EM | CoQA F1/EM | XSum ROUGE-1/ROUGE-2 |
| T5 Large [54] | 85.7/85.0 | 78.0 | 84.6 | 84.8 | 84.3 | 71.6 | 96.4 | 80.9/46.6 | - | 40.9/17.3 |
| BART Large [55] | 88.3/87.8 | 60.0 | 65.4 | 84.5 | 84.3 | 69.0 | 90.5 | 81.8/48.0 | - | 45.1/22.3 |
| GLM RoBERTa [56] | 89.6/89.0 | 82.0 | 83.7 | 87.7 | 84.7 | 71.2 | 98.7 | 82.4/50.1 | - | 45.5/23.5 |
| ChatGLM-6B [57] | 80.2/78.7 | 85.0 | 71.2 | 81.6 | 83.4 | 71.0 | 85.7 | 78.2/45.6 | - | 37.6/12.5 |
| LLaMA2-7B [53] | 81.3/79.5 | 75.0 | 68.3 | 73.3 | 83.8 | 70.9 | 85.7 | 82.3/50.1 | 88.6/80.9 | 45.9/26.1 |
| FL-LLaMA | 81.3/79.4 | 75.0 | 68.4 | 73.3 | 82.4 | 70.8 | 85.7 | 82.7/48.0 | 88.6/80.7 | 47.6/25.0 |

4.1.1. Dataset To evaluate model generalization across diverse NLP tasks, our experiments incorporate three distinct tasks: •The SuperGLUE benchmark [58] is designed for evaluating natural language understanding (NLU) with eight distinct tasks, which span a broad range of linguistic challenges, from common sense reasoning to fine-grained semantic interpretation. Table 1 illustrates examples of cloze-style questions and corresponding answers derived from each task. (1) ReCoRD tests reading comprehension with common sense reasoning by resolving ambiguous entities in passages. (2) COPA assesses causal reasoning by choosing the more likely cause or effect for a given premise. (3) WSC focuses on pronoun disambiguation requiring deep contextual understanding. (4) RTE evaluates textual entailment by determining if one sentence follows, contradicts, or is unrelated to another. (5) BoolQ involves yes/no questions answered using evidence from context passages. (6) WiC tests word sense disambiguation by checking if a word has the same meaning in two sentences. (7) CB deals with entailment in complex sentences with nuanced reasoning. (8) MultiRC features multi-answer questions based on multi-sentence contexts, requiring integration of dispersed information. To unify these tasks under a single modeling framework, we adopt the formulation used in GLM [56], representing each instance as a blank infilling problem.
Given a labeled pair (x, y), the input x is converted into a masked cloze question q(x) using a
placeholder token [M], while the output y serves as the target answer a(y). •The CoQA dataset [59] is designed for the task of conversational question answering (QA), focusing on evaluating contextual coherence in multi-turn dialogues. It includes over 8,000 dialogues and more than 127,000 questions drawn from seven diverse domains. Answers are provided in free-form text, with nearly half of the questions requiring coreference resolution (e.g., pronoun disambiguation) and pragmatic reasoning. This setup closely mirrors real-world conversational settings, aiming to assess a model's ability to understand text passages and respond to a series of interrelated questions. •The XSum dataset [60] is designed for the task of abstractive summarization, specifically evaluating a model's ability to perform extreme compression of salient information from news articles. It consists of BBC news articles paired with manually written summaries, covering a broad range of topics. A key challenge of XSum lies in its requirement for the model to generate a single-sentence summary for each article, placing significant demands on the model's ability to abstract and condense information effectively.

Table 3: Comparison of training time between different training strategies on the ReCoRD dataset.
| Strategy | num. of clients | time (s) |
| Sequential training | 2 | 689.2±5.7 |
| Client-batch training | 2 | 630.0±1.8 |
| Client-batch training | 4 | 542.2±3.5 |
| Client-batch training | 8 | 488.6±0.4 |
| Server-hierarchical training | 2 | 348.2±3.8 |
| Server-hierarchical training | 4 | 173.53±3.0 |
| Server-hierarchical training | 8 | 86.4±0.6 |
| Centralized | - | 381.9±2.4 |

4.1.2. Metrics Since NLU tasks are reformulated as blank infilling problems, model performance is assessed based on the probability assigned to the ground-truth answer a(y). For datasets such as RTE, BoolQ, WiC, CB, and MultiRC, where the correct answer is often a single token, the evaluation score is derived from the logit corresponding to that token. Specifically, we define the probability as follows:

p(y|x) = p(a(y) | q(x)) / Σ_{y′∈Y} p(a(y′) | q(x)),

where Y represents the set of possible ground-truth labels. In contrast, datasets like ReCoRD, COPA, and WSC require multi-token answers. In these cases, we evaluate model performance by summing the log-probabilities of each token in the predicted answer, computed as:

s(y|x) = Σ_{t=1}^{|L_y|} log P(y_t | y_1 ... y_{t−1}, x; θ).

For conversational QA tasks, we use two metrics, Exact Match (EM) and F1 score, to compare model performance; these evaluate the accuracy of predicted answers against the ground truth [61, 62]. Exact Match (EM) measures the percentage of predictions that match the ground truth answers exactly, accounting for variations in punctuation and case. The F1 score is the harmonic mean of precision and recall, providing a more nuanced view of performance by rewarding partial overlaps between the predicted and reference answers. For summarization tasks, we adopt standard ROUGE metrics [63, 64, 65], i.e., ROUGE-1 and ROUGE-2, which are widely used to measure content overlap between generated summaries and reference texts. ROUGE-1 evaluates unigram overlap, while ROUGE-2 measures bigram overlap, providing insights into both content coverage and fluency. 4.1.3. Baselines We apply FL-LLaMA to the LLaMA2-7B model [53], an open-source pre-trained language model with 7 billion parameters. For a fair comparison with LLaMA2-7B, we use 5 baselines: T5 Large [54], BART Large [55], GLM RoBERTa [56], ChatGLM-6B [66] and LLaMA2-7B [53]. 4.1.4. Parameter Settings The server
is equipped with a Gigabit Ethernet card and utilizes multiple GPUs, including two NVIDIA A6000 and two NVIDIA L40 cards. The implementation of FL-LLaMA is based on the Flower [67] federated learning framework, which orchestrates the transfer of intermediate data between clients and the server. Due to limited computational resources, FP32 precision and a batch size of 1 are used only for the CoQA dataset. All other experiments are conducted with FP16 precision and a batch size of 2. The random seed is consistently set to 42. Fine-tuning is performed using distributed LoRA training, where Blocks A, Blocks B, and Blocks C each maintain their own LoRA adapters.

4.2. Experimental Results

4.2.1. FL-LLaMA Performance

To validate the effectiveness of FL-LLaMA, we compared the performance of FL-LLaMA with centralized LLaMA2-7B on various tasks, including NLU, summarization, and conversational QA, as shown in Table 2. Experimental results show that FL-LLaMA and centralized LLaMA2-7B exhibit no significant performance differences, demonstrating the feasibility of the framework. Not only does FL-LLaMA perform comparably to centralized LLaMA2-7B, but the GPU memory consumption on the client side is also significantly reduced. The comparison of their GPU memory usage is shown in Fig. 6. While centralized LLaMA2-7B occupies 27.9 GB of GPU memory, the FL-LLaMA client requires only 6 GB, a reduction of up to 82% in memory requirements. This reduction enables enterprises or departments that possess large amounts of high-quality private data but have limited computational resources to participate in federated training of LLMs.

Figure 6: Computation load on the client side of centralized LLaMA2-7B and FL-LLaMA on the ReCoRD dataset.

Figure 7: Average magnitude of hidden state hA (Blocks C = 1) on the MultiRC dataset.

4.2.2. Parallel Training

For communication cost analysis, we randomly selected 1,000 samples from the ReCoRD dataset.
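The distributed LoRA setup described above, where each partition (Blocks A on the client, Blocks B on the server, Blocks C on the client) maintains its own adapter, can be sketched as a split forward pass. This is a minimal illustration under invented toy dimensions and class names, not the authors' implementation; each party trains only its own low-rank matrices while the base weights stay frozen.

```python
import numpy as np

class LoRALinear:
    """A frozen linear layer with a trainable low-rank (LoRA) update.

    Effective weight: W + (alpha / r) * B @ A, where only A and B are trained.
    """

    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) * 0.02  # frozen base weight
        self.A = rng.standard_normal((r, d_in)) * 0.01      # trainable
        self.B = np.zeros((d_out, r))                        # trainable, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        return x @ (self.W + self.scale * self.B @ self.A).T

# Each partition keeps its own adapter, so no party ever needs to
# share its trainable LoRA parameters with the others.
blocks_a = LoRALinear(64, 64, seed=1)   # client side
blocks_b = LoRALinear(64, 64, seed=2)   # server side
blocks_c = LoRALinear(64, 64, seed=3)   # client side

x = np.ones((2, 64))
h_a = blocks_a(x)     # client forward; hidden state h_a is sent to the server
h_b = blocks_b(h_a)   # server forward; result returned to the client
out = blocks_c(h_b)   # client produces the final output
```

Because B is zero-initialized, the adapter contributes nothing at step 0 and training starts from the base model's behavior, which is the standard LoRA convention.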
In our experimental setup, all training used a batch size of 1 and adopted FP16 training to improve computational efficiency. For LoRA, the decomposition rank r is set to 8 and the scaling factor α is configured to 16, focusing specifically on the attention modules during fine-tuning. Additionally, the model is partitioned with Blocks A = 1 and Blocks C = 1. Each experiment is repeated five times, and we report the mean and standard deviation of the total communication time, as summarized in Tab. 3. The results show that sequential training takes the longest time, followed by client-batch training, while server-hierarchical training takes the shortest. Moreover, when the server can accommodate a larger batch size, increasing the number of clients in client-batch training effectively increases the batch size and thus reduces overall training time.

4.2.3. Effects of Noise Scales

Since noise injection into the hidden states during forward propagation inevitably impacts the performance of distributed LLMs, we conducted extensive experiments to investigate how varying levels of noise affect model performance. The results demonstrate that introducing appropriate levels of noise leads to only a slight decrease in the performance of distributed LLMs, while significantly enhancing their security.

Figure 8: FL-LLaMA: noise injection across different numbers of blocks in Blocks A on the MultiRC dataset. Loss curves for different noise scales with (a) Blocks A = 1, (b) Blocks A = 2, and (c) Blocks A = 3 (Blocks C = 1 in all cases).

Table 4: Analysis of different noise scales and numbers of blocks in Blocks A (F1a / EM) on the MultiRC dataset.

| Noise scale | Blocks A = 1 | Blocks A = 2 | Blocks A = 3 |
|---|---|---|---|
| 0.0 | 77.48 / 40.82 | 77.82 / 39.24 | 77.89 / 41.34 |
| 0.02 | 77.89 / 38.09 | 77.87 / 39.66 | 78.61 / 42.39 |
| 0.05 | 74.57 / 29.17 | 77.07 / 38.41 | 78.95 / 41.03 |
| 0.1 | 59.11 / 8.18 | 72.45 / 23.71 | 75.68 / 34.10 |

The magnitude of noise injection significantly influences the stability and convergence speed of model training. Given that noise is directly injected into the hidden state hA, the relative magnitude between the noise and hA naturally affects the model’s performance. To quantify this effect, we first fine-tuned the FL-LLaMA model on MultiRC [68] with a batch size of 2 and a learning rate of 1×10^-5, analyzing the average absolute magnitude of all elements within the hidden state hA. According to Fig. 7, the average magnitude of the hidden state hA is approximately 0.02 (number of blocks in Blocks A = 1). The average magnitude of hidden state hA increases with the number of blocks in Blocks A. Based on this observation, we subsequently injected noise at various scales around this value during training to investigate how the noise magnitude affects both the convergence speed and downstream task performance. Training is conducted for 5,000 steps with FP16 precision. We used LoRA with a rank of 8 and a scaling factor α = 16. The learning rate is 1e-5, and the batch size is 2. The corresponding experimental results are presented in Fig. 8a-8c and Tab. 4. Fig. 8a-8c demonstrate that when the numbers of blocks in Blocks A and Blocks C are held constant, the convergence speed of FL-LLaMA decreases as the magnitude of injected noise increases. When the average absolute value of the noise is small (e.g., below 0.02), the loss curve remains close to that of the noise-free baseline.
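The noise-injection procedure above can be sketched as follows. This is a minimal numpy illustration, not the authors' code; the function names and tensor shapes are invented, and the toy hidden state is constructed so its average absolute magnitude lands near the 0.02 reported in Fig. 7.

```python
import numpy as np

def inject_noise(h_a, noise_scale, rng):
    """Add zero-mean Gaussian noise to the hidden state before it leaves the client."""
    noise = rng.normal(loc=0.0, scale=noise_scale, size=h_a.shape)
    return h_a + noise

def signal_to_noise(h_a, noise_scale):
    """Ratio of the average absolute hidden-state magnitude to the noise scale."""
    return float(np.mean(np.abs(h_a))) / noise_scale

rng = np.random.default_rng(42)
# Toy hidden state (batch, seq_len, hidden_dim) whose mean |value| is roughly 0.02.
h_a = rng.normal(0.0, 0.025, size=(2, 16, 64))

for scale in (0.02, 0.05, 0.1):
    noisy = inject_noise(h_a, scale, rng)
    snr = signal_to_noise(h_a, scale)
    # Larger noise scale -> lower SNR -> slower convergence and lower
    # downstream scores, mirroring the trend in Fig. 8 and Tab. 4.
```

Stacking more blocks in Blocks A raises the average magnitude of hA, so for a fixed noise scale the SNR (and hence robustness) increases, which is the effect the paragraph above attributes to deeper client-side partitions.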
A horizontal comparison reveals that, under the same noise level, increasing the number of blocks in Blocks A consistently accelerates convergence.

Tab. 4 indicates that under fixed noise levels, increasing the number of blocks in Blocks A consistently improves downstream performance in terms of both F1 and EM scores. Notably, when the noise amplitude reaches 0.1, only the single-block configuration of Blocks A shows a significant drop in downstream performance, whereas the three-block version of Blocks A demonstrates greater robustness. This can be explained by the average magnitude shown in Fig. 7: as the depth of Blocks A increases, the ratio between the average absolute value of the hidden state hA and the noise amplitude also increases, thereby improving the signal-to-noise ratio (SNR). This enhanced SNR is precisely what contributes to the better convergence and stronger robustness observed with a larger number of blocks in Blocks A.

Table 5: Security analysis (ROUGE-2 F1 / BLEU-4) of FL-LLaMA on the ReCoRD dataset.

| Noise scale | Blocks A = 1 | Blocks A = 2 | Blocks A = 3 |
|---|---|---|---|
| 0.0 | 0.051 / 0.003 | 0.056 / 0.003 | 0.069 / 0.016 |
| 0.02 | 0.050 / 0.002 | 0.052 / 0.003 | 0.084 / 0.044 |
| 0.05 | 0.050 / 0.003 | 0.051 / 0.002 | 0.067 / 0.004 |

Table 6: Performance (EM) under different noise scales and numbers of blocks in Blocks A on the ReCoRD dataset.

| Noise scale | Blocks A = 1 | Blocks A = 2 | Blocks A = 3 |
|---|---|---|---|
| 0.0 | 77.4 | 79.8 | 78.9 |
| 0.02 | 79.1 | 79.9 | 79.6 |
| 0.05 | 79.6 | 79.4 | 79.2 |

Figure 9: FL-LLaMA: inference speed comparison (w/ vs. w/o attention mask).

4.2.4. Model Reverse Attack

To investigate the impact of split point selection and noise scale on the security of FL-LLaMA, we evaluate the attack performance of the adversarial model under varying configurations of Blocks A depth (Blocks A = 1, 2, 3) and noise (scale = 0, 0.02, 0.05). The attack model is trained via full-parameter fine-tuning with a learning rate of 2e-5. Reconstruction quality is evaluated using the standard generation metrics BLEU-4 and ROUGE-2. FedBERT reports a ROUGE-2 of 7.053 and a BLEU-4 of 28.57. Experimental results of FL-LLaMA are shown in Tab. 5. From the results, we can see that when the client holds only the embedding layer (similar to FedBERT), the attack model achieves relatively high BLEU and ROUGE scores. However, under the FL-LLaMA framework, where the client also retains a portion of the LLM’s blocks in addition to the embedding layer, all attack metrics approach zero. Tab. 6 reports the downstream performance of client 0 after multi-client training, showing that the defense allows training to proceed as usual without disrupting the federated process. These findings demonstrate that FL-LLaMA effectively mitigates model inversion attacks, thereby preserving the privacy of client-side data.

Figure 10: FL-LLaMA: inference speed comparison (w/ vs. w/o KV cache).

Table 7: Analysis of different model partitions (F1a / EM) on the MultiRC dataset. No noise is added to the hidden states.

| Blocks A | Blocks C = 1 | Blocks C = 2 | Blocks C = 3 |
|---|---|---|---|
| 1 | 80.34 / 44.60 | 80.63 / 46.59 | 80.90 / 47.11 |
| 2 | 80.41 / 45.86 | 80.62 / 45.96 | 79.94 / 45.02 |
| 3 | 81.17 / 47.22 | 80.74 / 46.59 | 80.20 / 45.33 |

4.2.5. Efficiency Optimization

Fig. 9 shows a comparison of the time consumption between transferring the attention mask and transferring only the actual input length during distributed training.
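The mask-compression idea just mentioned, sending only each sequence's true length instead of the full attention mask and rebuilding the mask on the server, can be sketched as follows. This is an illustrative sketch assuming right-padded sequences; the helper names are invented, not FL-LLaMA's API.

```python
import numpy as np

def compress_mask(attention_mask):
    """Client side: send only each sequence's true length instead of the full mask."""
    return attention_mask.sum(axis=1).astype(np.int64)  # shape (batch,)

def rebuild_mask(lengths, max_len):
    """Server side: reconstruct the right-padding mask from the transmitted lengths."""
    return (np.arange(max_len)[None, :] < lengths[:, None]).astype(np.int64)

# A batch of 3 sequences padded to length 8.
mask = np.array([
    [1, 1, 1, 1, 1, 0, 0, 0],   # length 5
    [1, 1, 1, 0, 0, 0, 0, 0],   # length 3
    [1, 1, 1, 1, 1, 1, 1, 1],   # length 8
])

lengths = compress_mask(mask)        # only 3 integers cross the network
restored = rebuild_mask(lengths, 8)  # identical to the original mask
```

For right-padded inputs the mask is fully determined by the lengths, so the batch × seq_len mask never needs to be transmitted, which is the source of the latency reduction discussed for Fig. 9.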
The bar chart represents the average time spent per token in FL-LLaMA’s forward propagation, while the line chart represents the number of tokens generated per second by FL-LLaMA. The figure shows that transferring only the compressed effective information of the attention mask plays a significant role in reducing the latency of distributed LLMs, with an average time reduction of 87.6%. Fig. 10 illustrates the effect of the KV cache technique on inference speed for different query lengths in distributed inference. The experimental results indicate that without the KV cache, the inference speed of FL-LLaMA decreases significantly as the query length increases, primarily due to the continuously growing number of K and V vectors computed at each step of the self-attention mechanism. In contrast, with the KV cache technique, the client only needs to send the intermediate vector corresponding to a single token to the server, while reusing the previously generated K and V vectors, thereby effectively avoiding redundant computation. When the query length reaches 4,000, the inference speed of FL-LLaMA with the KV cache is 8.2 times that without it, and as the query length increases further this advantage continues to expand, greatly alleviating the computational burden and operational latency of distributed LLMs.

4.2.6. Adaptive Splitting

The choice of model
partitioning has a significant impact on the computational burden at the client side. To evaluate whether FL-LLaMA maintains robust performance under different partitioning strategies, we investigate its downstream task performance when clients hold different numbers of blocks in Blocks A and Blocks C (Blocks A = 1, 2, 3 and Blocks C = 1, 2, 3). This setting reflects realistic constraints where clients possess limited computational resources. All models are trained for 8,000 steps using FP16 precision. LoRA is applied with a rank of 16 and a scaling factor of α = 32. Training used a batch size of 2 and a learning rate of 2e-5. Tab. 7 presents F1a/EM scores under various partitioning schemes. The highest F1a/EM scores (81.17/47.22) emerge when Blocks A = 3 and Blocks C = 1, indicating that allocating more early blocks to the client side can slightly enhance task performance. Overall, the performance differences across partitions remain small, suggesting that FL-LLaMA is relatively insensitive to how the model is split. From a practical standpoint, these findings confirm that clients with constrained resources can flexibly adjust how many blocks they store without substantially degrading accuracy. In scenarios where client-side memory or compute capacity is limited, reducing Blocks A and Blocks C remains viable, given that FL-LLaMA maintains robust performance across a range of partitions.

Table 8: Analysis of different model partitions (F1a / EM) on the MultiRC dataset. Noise (scale = 0.02) is added to the hidden states hA.

| Blocks A | Blocks C = 1 | Blocks C = 2 | Blocks C = 3 |
|---|---|---|---|
| 1 | 79.44 / 38.41 | 79.58 / 38.61 | 79.45 / 39.66 |
| 2 | 80.10 / 41.55 | 80.14 / 38.61 | 79.99 / 40.40 |
| 3 | 79.90 / 40.29 | 80.29 / 43.02 | 80.44 / 42.18 |

Building upon the partitioning experiment, we introduced controlled noise into the hidden states produced by Blocks A. Specifically, Gaussian noise with a scale of 0.02 is added during forward propagation before sending the hidden states to the server. Tab.
8 shows that while there is a slight drop compared to the noise-free scenario, the overall performance remains robust, indicating that moderate perturbations in the hidden states do not severely impact convergence or accuracy. Further, the experimental results show that the higher the number of blocks in Blocks A, the smaller the performance degradation of FL-LLaMA on downstream tasks. This indicates greater resistance to noise, which verifies that the signal-to-noise ratio has a significant impact on model performance.

5. Conclusion

In this work, we proposed FL-LLaMA, a novel federated split learning framework tailored for large language models. By securely offloading the majority of model computation to a central server while maintaining privacy-sensitive components on local clients, FL-LLaMA addresses the core limitations of traditional federated split learning approaches. Through secure end-to-end propagation with noise injection, parallel training, inference acceleration, and adaptive partitioning tailored to downstream tasks and hardware, FL-LLaMA demonstrates significant advances in security, efficiency, and adaptability. This federated split LLM framework not only keeps users’ private data confined to their local devices but also significantly shortens training and inference time, making it better suited for handling the scale and complexity of LLMs. In future work, we aim to support more base models and deploy the FL-LLaMA framework in scenarios where
data privacy is a critical concern.

References

[1] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, D. Amodei, Scaling laws for neural language models, arXiv preprint arXiv:2001.08361 (2020). [2] X. Yang, H. Yu, X. Gao, H. Wang, J. Zhang, T. Li, Federated continual learning via knowledge fusion: A survey, IEEE Transactions on Knowledge and Data Engineering 36 (8) (2024) 3832–3850. [3] R. Ye, W. Wang, J. Chai, D. Li, Z. Li, Y. Xu, Y. Du, Y. Wang, S. Chen, Openfedllm: Training large language models on decentralized private data via federated learning, in: Proceedings of the 30th ACM SIGKDD conference on knowledge discovery and data mining, 2024, pp. 6137–6147. [4] B. Ghimire, D. B. Rawat, Recent advances on federated learning for cybersecurity and cybersecurity for federated learning for internet of things, IEEE Internet of Things Journal 9 (11) (2022) 8229–8249. [5] W. Huang, M. Ye, Z. Shi, G. Wan, H. Li, B. Du, Q. Yang, Federated learning for generalization, robustness, fairness: A survey and benchmark, IEEE Transactions on Pattern Analysis and Machine Intelligence (2024). [6] P. Qi, D. Chiaro, A. Guzzo, M. Ianni, G. Fortino, F. Piccialli, Model aggregation techniques in federated learning: A comprehensive survey, Future Generation Computer Systems 150 (2024) 272–293. [7] L. Yuan, Z. Wang, L. Sun, P. S. Yu, C. G. Brinton, Decentralized federated learning: A survey and perspective, IEEE Internet of Things Journal (2024). [8] J. Chen, H. Yan, Z. Liu, M. Zhang, H. Xiong, S. Yu, When federated learning meets privacy-preserving computation, ACM Computing Surveys 56 (12) (2024) 1–36. [9] J. Stremmel, A. Singh, Pretraining federated text models for next word prediction, in: Advances in Information and Communication: Proceedings of the 2021 Future of Information and Communication Conference (FICC), Volume 2, Springer, 2021, pp. 477–488. [10] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, V.
Smith, Federated optimization in heterogeneous networks, Proceedings of Machine learning and systems 2 (2020) 429–450. [11] Y. Tian, Y. Wan, L. Lyu, D. Yao, H. Jin, L. Sun, Fedbert: When federated learning meets pre-training, ACM Transactions on Intelligent Systems and Technology (TIST) 13 (4) (2022) 1–26. [12] S. Gupta, Y. Huang, Z. Zhong, T. Gao, K. Li, D. Chen, Recovering private text in federated learning of language models, Advances in Neural Information Processing Systems 35 (2022) 8130–8143. [13] V. Asnani, X. Yin, T. Hassner, X. Liu, Reverse engineering of generative models: Inferring model hyperparameters from generated images, IEEE Transactions on Pattern Analysis and Machine Intelligence (2023). [14] J.-Y. Zheng, H. Zhang, L. Wang, W. Qiu, H.-W. Zheng, Z.-M. Zheng, Safely learning with private data: A federated learning framework for large language model, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024, pp. 5293–5306. [15] B. McMahan, E. Moore, D. Ramage, S. Hampson, B. A. y Arcas, Communication-efficient learning of deep networks from decentralized data, in: Artificial intelligence and statistics, PMLR, 2017, pp. 1273–1282. [16] A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, D. Ramage, | https://arxiv.org/abs/2505.15683v1 |
Federated learning for mobile keyboard prediction, arXiv preprint arXiv:1811.03604 (2018). [17] M. Chen, R. Mathews, T. Ouyang, F. Beaufays, Federated learning of out-of-vocabulary words, arXiv preprint arXiv:1903.10635 (2019). [18] S. Ji, S. Pan, G. Long, X. Li, J. Jiang, Z. Huang, Learning private neural language modeling with attentive aggregation, in: 2019 International joint conference on neural networks (IJCNN), IEEE, 2019, pp. 1–8. [19] T. Fan, Y. Kang, G. Ma, W. Chen, W. Wei, L. Fan, Q. Yang, Fate-llm: A industrial grade federated learning framework for large language models, arXiv preprint arXiv:2310.10049 (2023). [20] W. Kuang, B. Qian, Z. Li, D. Chen, D. Gao, X. Pan, Y. Xie, Y. Li, B. Ding, J. Zhou, Federatedscope-llm: A comprehensive package for fine-tuning large language models in federated learning, in: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2024, pp. 5260–5271. [21] H. Wu, X. Li, D. Zhang, X. Xu, J. Wu, P. Zhao, Z. Liu, Cg-fedllm: How to compress gradients in federated fine-tuning for large language models, CoRR (2024). [22] Y. Shu, W. Hu, S.-K. Ng, B. K. H. Low, F. R. Yu, Ferret: Federated full-parameter tuning at scale for large language models, in: International Workshop on Federated Foundation Models in Conjunction with NeurIPS 2024, 2024. [23] Z. JianHao, C. Lv, X. Wang, M. Wu, W. Liu, T. Li, Z. Ling, C. Zhang, X. Zheng, X.-J. Huang, Promoting data and model privacy in federated learning through quantized lora, in: Findings of the Association for Computational Linguistics: EMNLP 2024, 2024, pp. 10501–10512. [24] J. Liu, J. Ren, R. Jin, Z. Zhang, Y. Zhou, P. Valduriez, D. Dou, Fisher information-based efficient curriculum federated learning with large language models, in: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024, pp. 10497–10523. [25] F. Wu, Z. Li, Y. Li, B. Ding, J.
Gao, Fedbiot: Llm local fine-tuning in federated learning without full model, in: Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2024, pp. 3345–3355. [26] T. Fan, Y. Kang, G. Ma, L. Fan, K. Chen, Q. Yang, Fedcollm: A parameter-efficient federated co-tuning framework for large and small language models, arXiv preprint arXiv:2411.11707 (2024). [27] T. Fan, G. Ma, Y. Kang, H. Gu, Y. Song, L. Fan, K. Chen, Q. Yang, Fedmkt: Federated mutual knowledge transfer for large and small language models, in: Proceedings of the 31st International Conference on Computational Linguistics, 2025, pp. 243–255. [28] J. Zhang, S. Vahidian, M. Kuo, C. Li, R. Zhang, T. Yu, G. Wang, Y. Chen, Towards building the federatedgpt: Federated instruction tuning, in: ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2024, pp. 6915–6919. [29] J. Bai, D. Chen, B. Qian, L. Yao, Y. Li, Federated fine-tuning of large language models under heterogeneous language tasks and client resources, arXiv e-prints (2024) arXiv–2402. [30] Z. Wang, Z. Shen, Y. He, G. Sun, H. Wang, L. Lyu, A. Li, Flora: Federated fine-tuning large language models with heterogeneous low-rank adaptations, CoRR (2024). [31] C. Thapa, P. C. M. Arachchige, S. Camtepe, L. Sun, Splitfed: When federated | https://arxiv.org/abs/2505.15683v1 |
learning meets split learning, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36, 2022, pp. 8485–8493. [32] A. Abedi, S. S. Khan, Fedsl: Federated split learning on distributed sequential data in recurrent neural networks, Multimedia Tools and Applications 83 (10) (2024) 28891–28911. [33] S. Abuadbba, K. Kim, M. Kim, C. Thapa, S. A. Camtepe, Y. Gao, H. Kim, S. Nepal, Can we use split learning on 1d cnn models for privacy preserving training?, in: Proceedings of the 15th ACM Asia Conference on Computer and Communications Security, 2020, pp. 305–318. [34] S. A. Rahman, H. Tout, C. Talhi, A. Mourad, Internet of things intrusion detection: Centralized, on-device, or federated learning?, IEEE Network 34 (6) (2020) 310–317. [35] Y. Matsubara, M. Levorato, Neural compression and filtering for edge-assisted real-time object detection in challenged networks, in: 2020 25th International Conference on Pattern Recognition (ICPR), IEEE, 2021, pp. 2272–2279. [36] N. Su, C. Hu, B. Li, B. Li, Titanic: Towards production federated learning with large language models, in: IEEE INFOCOM 2024-IEEE Conference on Computer Communications, IEEE, 2024, pp. 611–620. [37] L. Zhu, Z. Liu, S. Han, Deep leakage from gradients, in: Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 14774–14784. [38] C. Dwork, F. McSherry, K. Nissim, A. Smith, Calibrating noise to sensitivity in private data analysis, in: Theory of Cryptography: Third Theory of Cryptography Conference, TCC 2006, New York, NY, USA, March 4-7, 2006. Proceedings 3, Springer, 2006, pp. 265–284. [39] K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, S. Jin, T. Q. Quek, H. V. Poor, Federated learning with differential privacy: Algorithms and performance analysis, IEEE Transactions on Information Forensics and Security 15 (2020) 3454–3469. [40] Y. Huang, Z. Song, D. Chen, K. Li, S.
Arora, Texthide: Tackling data privacy in language understanding tasks, in: Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 1368–1382. [41] D. N. Yaldiz, T. Zhang, S. Avestimehr, Secure federated learning against model poisoning attacks via client filtering, in: ICLR 2023 Workshop on Backdoor Attacks and Defenses in Machine Learning, 2023. [42] D. Yu, S. Naik, A. Backurs, S. Gopi, H. A. Inan, G. Kamath, J. Kulkarni, Y. T. Lee, A. Manoel, L. Wutschitz, et al., Differentially private fine-tuning of language models, Journal of Privacy and Confidentiality 14 (2) (2024). [43] X. Li, F. Tramer, P. Liang, T. Hashimoto, Large language models can be strong differentially private learners, in: Inter- national Conference on Learning Representations, 2021. [44] A. Jalalirad, M. Scavuzzo, C. Capota, M. Sprague, A simple and efficient federated recommender system, in: Proceedings of the 6th IEEE/ACM international conference on big data computing, applications and technologies, 2019, pp. 53–58. [45] O. Thakkar, S. Ramaswamy, R. Mathews, F. Beaufays, Understanding unintended memorization in federated learning, arXiv preprint arXiv:2006.07490 (2020). [46] X.-Y. Liu, R. Zhu, D. Zha, J. Gao, S. Zhong, M. White, M. Qiu, Differentially private low-rank adaptation of large language model using federated learning, ACM Transactions on Management Information Systems 16 (2) (2025) 1–24. [47] C. Dwork, Differential privacy, in: Proceedings of | https://arxiv.org/abs/2505.15683v1 |
the 33rd International Conference on Automata, Languages and Programming - Volume Part II, ICALP’06, Springer-Verlag, Berlin, Heidelberg, 2006, p. 1–12. doi:10.1007/11787006_1. URL https://doi.org/10.1007/11787006_1 [48] Y. Sun, Z. Li, Y. Li, B. Ding, Improving lora in privacy-preserving federated learning, CoRR (2024). [49] E. Erdoğan, A. Küpçü, A. E. Çiçek, Unsplit: Data-oblivious model inversion, model stealing, and label inference attacks against split learning, in: Proceedings of the 21st Workshop on Privacy in the Electronic Society, 2022, pp. 115–124. [50] D. Pasquini, G. Ateniese, M. Bernaschi, Unleashing the tiger: Inference attacks on split learning, in: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, 2021, pp. 2113–2129. [51] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, L. Zhang, Deep learning with differential privacy, in: Proceedings of the 2016 ACM SIGSAC conference on computer and communications security, 2016, pp. 308–318. [52] K. Clark, Computing neural network gradients, unpublished Journal (2017). [53] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al., Llama 2: Open foundation and fine-tuned chat models, arXiv preprint arXiv:2307.09288 (2023). [54] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, P. J. Liu, Exploring the limits of transfer learning with a unified text-to-text transformer, Journal of machine learning research 21 (140) (2020) 1–67. [55] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, L. Zettlemoyer, Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 7871–7880. [56] Z. Du, Y. Qian, X. Liu, M. Ding, J. Qiu, Z. Yang, J.
Tang, Glm: General language model pretraining with autoregressive blank infilling, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022, pp. 320–335. [57] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia, et al., Glm-130b: An open bilingual pre-trained model, arXiv preprint arXiv:2210.02414 (2022). [58] A. Wang, Y. Pruksachatkun, N. Nangia, A. Singh, J. Michael, F. Hill, O. Levy, S. R. Bowman, Superglue: A stickier benchmark for general-purpose language understanding systems (2020). arXiv:1905.00537 . URL https://arxiv.org/abs/1905.00537 [59] S. Reddy, D. Chen, C. D. Manning, Coqa: A conversational question answering challenge, Transactions of the Association for Computational Linguistics 7 (2019) 249–266. [60] S. Narayan, S. B. Cohen, M. Lapata, Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 1797–1807. [61] M. Yatskar, A qualitative comparison of coqa, squad 2.0 and quac, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019, pp. 2318–2323. [62] C. You, N. Chen, F. Liu, S. Ge, X. Wu, Y. Zou, End-to-end spoken conversational question answering: | https://arxiv.org/abs/2505.15683v1 |
Task, dataset and model, in: Findings of the Association for Computational Linguistics: NAACL 2022, 2022, pp. 1219–1232. [63] J. Liu, Y. Zou, H. Zhang, H. Chen, Z. Ding, C. Yuan, X. Wang, Topic-aware contrastive learning for abstractive dialogue summarization, in: Findings of the Association for Computational Linguistics: EMNLP 2021, 2021, pp. 1229–1243. [64] J. Chen, D. Yang, Multi-view sequence-to-sequence models with conversational structure for abstractive dialogue summarization, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp. 4106–4118. [65] Y. Fang, H. Zhang, H. Chen, Z. Ding, B. Long, Y. Lan, Y. Zhou, From spoken dialogue to formal summary: An utterance rewriting for dialogue summarization, in: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022, pp. 3859–3869. [66] T. GLM, A. Zeng, B. Xu, B. Wang, C. Zhang, D. Yin, D. Rojas, G. Feng, H. Zhao, H. Lai, H. Yu, H. Wang, J. Sun, J. Zhang, J. Cheng, J. Gui, J. Tang, J. Zhang, J. Li, L. Zhao, L. Wu, L. Zhong, M. Liu, M. Huang, P. Zhang, Q. Zheng, R. Lu, S. Duan, S. Zhang, S. Cao, S. Yang, W. L. Tam, W. Zhao, X. Liu, X. Xia, X. Zhang, X. Gu, X. Lv, X. Liu, X. Liu, X. Yang, X. Song, X. Zhang, Y. An, Y. Xu, Y. Niu, Y. Yang, Y. Li, Y. Bai, Y. Dong, Z. Qi, Z. Wang, Z. Yang, Z. Du, Z. Hou, Z. Wang, Chatglm: A family of large language models from glm-130b to glm-4 all tools (2024). arXiv:2406.12793. [67] D. J. Beutel, T. Topal, A. Mathur, X. Qiu, J. Fernandez-Marques, Y. Gao, L. Sani, H. L. Kwing, T. Parcollet, P. P. d. Gusmão, N. D. Lane, Flower: A friendly federated learning research framework, arXiv preprint arXiv:2007.14390 (2020). [68] D. Khashabi, S. Chaturvedi, M. Roth, S. Upadhyay, D.
Roth, Looking beyond the surface: A challenge set for reading comprehension over multiple sentences, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018, pp. 252–262.
ThinkLess: A Training-Free Inference-Efficient Method for Reducing Reasoning Redundancy

Gengyang Li1,2* Yifeng Gao1,2* Yuming Li2 Yunfang Wu1,3†
1National Key Laboratory for Multimedia Information Processing, Peking University
2School of Software and Microelectronics, Peking University
3School of Computer Science, Peking University
{ligengyang, yifgao26, 2301210310}@stu.pku.edu.cn wuyf@pku.edu.cn

Abstract

While Chain-of-Thought (CoT) prompting improves reasoning in large language models (LLMs), the excessive length of reasoning tokens increases latency and KV cache memory usage, and may even truncate final answers under context limits. We propose ThinkLess, an inference-efficient framework that terminates reasoning generation early and maintains output quality without modifying the model. Attention analysis reveals that answer tokens focus minimally on earlier reasoning steps and primarily attend to the reasoning terminator token, due to information migration under causal masking. Building on this insight, ThinkLess inserts the terminator token at earlier positions to skip redundant reasoning while preserving the underlying knowledge transfer. To prevent the format disruption caused by early termination, ThinkLess employs a lightweight post-regulation mechanism, relying on the model’s natural instruction-following ability to produce well-structured answers. Without fine-tuning or auxiliary data, ThinkLess achieves comparable accuracy to full-length CoT decoding while greatly reducing decoding time and memory consumption.

1 Introduction

Large language models (LLMs) (Vaswani et al., 2017; Zhang et al., 2025b) have achieved remarkable progress in natural language understanding and generation, but still struggle with tasks requiring multi-step reasoning.
Chain-of-Thought (CoT) prompting (Wei et al., 2022) has emerged as a popular approach to address this issue, enabling models to decompose problems into intermediate reasoning steps before producing an answer. While CoT improves accuracy on challenging benchmarks (Zhang et al., 2022; Jaech et al., 2024), it comes at a cost: reasoning tokens tend to be long and autoregressively generated, introducing substantial latency and memory overhead during inference.

*Equal contribution. †Corresponding author.

Figure 1: GPQA (Rein et al., 2024) accuracy of DeepSeek-R1-Distill-LLaMA-8B (Guo et al., 2025) under varying token budgets. Red: ThinkLess (compressed reasoning); blue: full CoT reasoning. The left part of the legend illustrates the relationship between marker size and latency, the middle part denotes each method, and the right part presents the maximum accuracy and corresponding latency of each method.

As shown in Figure 1, increasing the token budget does improve accuracy, but the gains diminish rapidly, indicating clear marginal returns. Beyond a certain point, longer generations incur significantly higher computational cost without meaningful performance improvement. In deployment scenarios where user experience and response time are critical, such overhead becomes a practical bottleneck, making blind expansion of reasoning length both inefficient and unsustainable.

Several efforts aim to improve CoT efficiency through techniques such as feedback-based refinement (Yao et al., 2023b), search and planning (Bi et al., 2024; Ye et al., 2024), and iterative optimization (Zhang et al., 2024). While effective in controlled settings, these approaches typically rely on additional training, curated datasets, or supervised fine-tuning (SFT), introducing significant engineering overhead. Moreover, their reliance on task-specific data or model customization limits generalizability, making them difficult to scale or deploy in real-world systems where flexibility, modularity, and minimal intervention are critical.

We introduce ThinkLess, an inference-efficient framework that reduces DeepSeek-R1 (Guo et al., 2025) distilled CoT reasoning overhead without any model modification or additional training. Our key insight stems from an attention analysis: during answer generation, models rely minimally on earlier reasoning steps and focus disproportionately on the reasoning terminator tokens (e.g., </think>). This indicates that reasoning information is progressively migrated and compressed toward the end of the reasoning sequence due to causal attention (Lin et al., 2025).

However, naively truncating reasoning by inserting the terminator token early often results in disrupted output formats. To address this, ThinkLess employs a lightweight output regulation that guides the model to produce well-structured responses. This is implemented simply by appending a small instruction prompt after early termination, leveraging the model’s inherent instruction-following capabilities. This post-regulation step requires no model modification or fine-tuning, yet proves essential for maintaining output consistency and restoring accuracy degraded by premature reasoning truncation.

ThinkLess achieves substantial efficiency gains. As illustrated in Figure 1, ThinkLess reaches strong performance at a much lower token budget compared to full CoT decoding, and further reduces inference latency, as reflected by the smaller marker sizes. These results demonstrate that long-form reasoning is not always necessary; with proper output regulation, shortened reasoning can retain accuracy while dramatically improving inference efficiency.

Our contributions are as follows:

• We present an attention-based analysis revealing that answer tokens in CoT generation attend minimally to earlier reasoning steps, indicating substantial redundancy.
• We propose ThinkLess, a training-free early termination strategy that injects a reasoning terminator token to truncate redundant reasoning while preserving core information.

• To mitigate format disruption caused by early termination, we introduce a lightweight output regulation mechanism that restores structured answers using a minimal instruction prompt.

• ThinkLess achieves comparable performance to full CoT decoding with fewer tokens and lower inference cost, offering a plug-and-play solution deployable across models and tasks.

2 Related Work

2.1 LLMs Reasoning

Reasoning is a fundamental capability of LLMs, enabling them to tackle complex multi-step tasks across diverse domains (Qiao et al., 2022). To enhance this ability, recent work has explored various prompting and architectural strategies. Chain-of-Thought (CoT) prompting (Wei et al., 2022) has emerged as a foundational method, guiding models to generate intermediate reasoning steps before producing final answers. This decomposition of complex problems into sub-goals significantly improves performance on arithmetic, commonsense, and scientific reasoning benchmarks (Kojima et al., 2022; Feng et al., 2023; Rein et al., 2024; Lyu et al., 2023). Building on CoT, techniques such as Self-Consistency (Wang et al., 2022) aggregate multiple reasoning paths to improve robustness, while Tree-of-Thoughts (Yao et al., 2023a) introduces structured planning via tree-based exploration. More recently, advanced frameworks like OpenAI's o1, Alibaba's QwQ (Team, 2025), and DeepSeek's R1 (Guo et al., 2025) have extended CoT by incorporating reflective reasoning modes such as trial-and-error, backtracking, and self-correction
(Shinn et al., 2023).

2.2 CoT Compression

While deeper reasoning improves performance, it often comes with diminishing returns and increasing computational cost (Chen et al., 2024; Wu et al., 2024). Excessively long reasoning sequences not only prolong inference but also strain memory and may even degrade output quality (Liu et al., 2025b,a). Recent work has thus focused on efficient CoT generation, which falls into two broad categories: training-based compression and inference-time optimization (Qu et al., 2025a; Sui et al., 2025). Training-based methods learn more compact reasoning traces through supervised fine-tuning. Some approaches compress CoT chains at the token level (Han et al., 2024; Xia et al., 2025), dynamically adjusting reasoning length based on task difficulty (Hao et al., 2024; Zhang et al., 2025a). Others replace explicit token-level reasoning with latent or abstract representations (Chen et al., 2024; Shen et al., 2025; Qu et al., 2025b), compressing the reasoning into a hidden state or learned vector. Inference-time methods, by contrast, improve efficiency without modifying model weights. These include Sketch-of-Thought and related methods (Aytes et al., 2025; Xu et al., 2025), which generate concise draft reasoning before producing final outputs, balancing coherence and computational cost.

Our ThinkLess aligns with this line of inference-time CoT optimization but differs by being entirely training-free and model-agnostic, applied in particular to DeepSeek-R1 distilled models. Rather than compressing reasoning through learning, ThinkLess truncates redundant reasoning tokens based on attention insights and restores output quality through a lightweight post-regulation mechanism.

3 Methodology

We present ThinkLess, a training-free framework designed to improve inference efficiency for CoT reasoning in LLMs.
ThinkLess achieves this by (1) identifying redundancy in long reasoning traces via attention and hidden-state analyses, and (2) introducing a lightweight termination and regulation mechanism that preserves output accuracy and format while significantly reducing decoding cost.

3.1 CoT Bottlenecks at Inference

Problem Formulation. Given a question q, an LLM generates a sequence of tokens x_{1:N} autoregressively, where each token x_i is sampled from the conditional probability p(x_i | q, x_{<i}). In CoT prompting, this sequence can be divided into reasoning tokens x^{reason}_{1:M} and answer tokens x^{answer}_{1:N}:

p(x^{\text{reason}}_{1:M} \mid q) = \prod_{i=1}^{M} p(x^{\text{reason}}_i \mid q, x^{\text{reason}}_{<i})   (1)

p(x^{\text{answer}}_{1:N} \mid q, x^{\text{reason}}_{1:M}) = \prod_{i=1}^{N} p(x^{\text{answer}}_i \mid q, x^{\text{reason}}_{1:M}, x^{\text{answer}}_{<i})   (2)

Inference-Time Bottlenecks. While reasoning tokens can enhance the model's ability to arrive at a more accurate answer during training, they introduce significant overhead during inference. Specifically, long reasoning sequences lead to increased computational costs, higher memory usage (due to the expanded KV cache (Qin et al., 2025)), and longer response times. This is particularly problematic in applications where quick answer responses are crucial, such as interactive AI systems.

Also, long reasoning paths may consume the context budget before generating answers, rendering the reasoning benefits inaccessible. This mismatch between computation and usable output severely undermines the efficacy of CoT at inference time. We empirically observe this issue in Figure 1, where the model's performance noticeably degrades when the total token length falls below 2^13. One key reason is that the answer segment is often truncated
due to limited context, preventing the model from fully leveraging the reasoning process it has computed.

Motivation. These challenges expose a core inefficiency in current CoT generation: even if reasoning is computed, the final answer may not be delivered due to truncation, or its benefits may be outweighed by the added inference burden. This raises an important question: how much of the reasoning is actually needed to support answer generation? In Section 3.2, we examine the model's internal attention behavior during decoding to investigate this question more closely. Section 3.3 then presents a termination mechanism with minimal formatting disruption, enabling efficient and accurate CoT inference.

3.2 Attention Reveals Redundancy in CoT Reasoning

To understand why long-form CoT reasoning incurs high cost but limited benefit, we analyze the model's attention behavior during answer generation. Our goal is to examine whether all reasoning tokens are equally useful, or if, as we hypothesize, later reasoning tokens alone may carry the necessary information for generating accurate answers.

We visualize attention patterns across transformer layers using DeepSeek-R1-Distill-Llama-8B on GSM8K samples, as shown in Figure 2. Each heatmap represents the attention weights from query tokens (rows) to key tokens (columns) during autoregressive decoding. The <think> and </think> tokens mark the boundaries of the reasoning span. In early layers, the model distributes attention broadly across the reasoning region, suggesting that it initially considers the full reasoning trace. However, as depth increases, the model's focus sharpens toward the end-of-reasoning boundary, particularly the </think> token. This transition implies a progressive information migration phenomenon, where reasoning content is gradually compressed toward the end of the span.

Figure 2: Attention heatmaps across layers 1, 4, 6, 12, 21, 25, 29, and 31 of DeepSeek-R1-Distill-LLaMA-8B on a GSM8K sample (Cobbe et al., 2021). Tokens within the <think>...</think> span receive uniform attention in early layers, but deeper layers gradually shift focus to the boundary tokens, indicating information migration and compression of reasoning content. Similar observations can be found in other models and datasets.

We attribute this behavior to causal masking: under left-to-right generation, downstream tokens cannot access future context. As a result, reasoning must be internally summarized and propagated forward token by token. This leads earlier reasoning tokens to fade from view, while later tokens, particularly </think>, accumulate and represent the distilled reasoning state. Similar phenomena have been explored by (Lin et al., 2025).

Analyzing Reasoning Redundancy. Building on the information migration mechanism discussed above, we ask: how early can useful reasoning be distilled during generation? Since reasoning content is expected to progressively compress toward the end of the span (e.g., </think>), we hypothesize that inserting this token at intermediate positions during decoding should yield hidden states that already approximate the final reasoning state. If true, this would suggest that the model has already internalized most of
the reasoning content before completing the full chain.

To test this hypothesis, we conduct a similarity-based redundancy analysis. Specifically, we insert the </think> token at a fixed segment length of 16 tokens during the reasoning generation process using DeepSeek-R1-Distill-Qwen-7B. At each insertion point, we extract the last-layer hidden state of the </think> token, treating it as the representation of accumulated reasoning up to that step. We then compute pairwise cosine similarities between these intermediate hidden states.

Figure 3: We insert a </think> token every 16 tokens in DeepSeek-R1-Distill-Qwen-7B and extract last-layer hidden states. These states are highly similar (~0.9) across segments, showing that reasoning adds little new information. The final state is also similar to earlier ones, indicating early convergence and redundancy in later reasoning. Similar observations can be found across other models and datasets. Best viewed zoomed in. (Panels show pairwise segment-similarity matrices for six GSM8K samples.)

As shown in Figure 3, the similarity between adjacent reasoning segments remains consistently high (~0.9), indicating that each additional segment introduces only marginal new information. Moreover, the similarity between the final </think> state and earlier ones increases steadily, confirming the progressive nature of reasoning aggregation. Notably, even early-inserted </think> tokens already yield hidden states highly similar to the final one, supporting the view that most useful reasoning content is distilled early, and that extended CoT traces incur diminishing returns.

Figure 4: Accuracy of DeepSeek-R1-Distill-Qwen-7B vs. the position where </think> is inserted. The benchmark is the BBH dataset (Suzgun et al., 2022).

3.3 ThinkLess: Reasoning Termination and Output
Regulation

Building on our earlier conclusion that the most useful reasoning content is distilled early, we ask: can reasoning be safely truncated early without sacrificing answer quality? Since the model gradually compresses reasoning into the </think> token, it may be possible to shorten the reasoning trace while still preserving essential information.

To verify this, we divide the full reasoning sequence into equal-length segments and insert the </think> token at varying cut-off points, thereby terminating reasoning at different locations. We then measure model accuracy across termination positions. Surprisingly, as shown in Figure 4, truncating reasoning early leads to decreased accuracy, despite our hypothesis that essential information should have already migrated toward the </think> token. This unexpected decline gradually recovers as the termination point moves later, forming a U-shaped performance curve.

A more detailed investigation in Sec. A shows that the observed decline in performance is not attributable to deficiencies in the model's reasoning process. Instead, the drop primarily arises from output formatting issues, such as the omission of the final answer or deviations from the expected response structure. These formatting errors can lead to incorrect evaluations, even when the model's internal reasoning is logically sound. Notably, after manually correcting these malformed outputs to align with the desired answer format, we find the underlying responses are indeed accurate, resulting in a substantial recovery in overall accuracy.

This confirms that the observed accuracy dip is a surface-level artifact: early termination disrupts output form, not semantic correctness. The model had already internalized the reasoning; it simply failed to express it in the expected format.
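The sweep just described can be sketched as follows. This is a toy illustration, not the authors' code: it only builds the truncated prefixes; in a real run each prefix would be fed back to the model to decode an answer and score accuracy.

```python
# Sketch of the truncation sweep behind Figure 4: split the reasoning trace
# into equal-length segments, cut at each segment boundary, and close the
# span with </think>. Cutting at position 0 (keeping no reasoning tokens),
# combined with an instruction prompt, is the ThinkLess configuration
# described in the following section.

TERMINATOR = "</think>"

def cut_points(trace_len, n_segments):
    # Evenly spaced cut-off positions, including 0 and the full length.
    step = max(1, trace_len // n_segments)
    return list(range(0, trace_len + 1, step))

def truncated_prefixes(trace, n_segments):
    return [trace[:c] + [TERMINATOR] for c in cut_points(len(trace), n_segments)]

full_trace = [f"r{i}" for i in range(32)]   # toy reasoning tokens
prefixes = truncated_prefixes(full_trace, 4)

print(cut_points(32, 4))   # [0, 8, 16, 24, 32]
print(len(prefixes[0]))    # earliest cut keeps only the terminator token
```

Plotting accuracy against the cut-off position for such prefixes is what produces the U-shaped curve in Figure 4.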
These results confirm that the model primarily relies on the </think> token to access reasoning information, rather than attending to every reasoning token individually. As a result, extending the reasoning span offers limited benefit, revealing substantial redundancy in long-form CoT.

ThinkLess Framework. We introduce ThinkLess, a simple, training-free framework to reduce CoT inference cost. The key idea is to insert the </think> token shortly after <think>, thereby skipping the majority of reasoning generation. This early termination substantially reduces decoding time and KV cache memory usage. However, such abrupt truncation may produce malformed answers that lack structural completeness.

To overcome this dilemma, ThinkLess employs a lightweight instruction-based output regulation step. For each task, we prepend a short instruction prompt (see Sec. B) to clarify output expectations. This approach leverages the strong instruction-following abilities of modern LLMs, enabling the model to produce well-structured responses, even in the absence of explicit reasoning. Since the added instruction is minimal, the overall inference cost remains low.

Clarification: ThinkLess Without Explicit Reasoning. ThinkLess inserts the </think> token right after <think>, thereby skipping the generation of any explicit CoT reasoning. At first glance, this appears to challenge the information migration hypothesis: if no intermediate reasoning tokens are produced, it is unclear what reasoning content, if any, is being transferred to inform the final answer. We contend, however, that the </think> token serves a
deeper function than a mere delimiter. It acts as a semantic anchor, a learned symbolic abstraction that implicitly encodes a compressed representation of the reasoning process. During pretraining, language models likely acquire the ability to internalize multi-step reasoning patterns and embed this abstracted knowledge into compact markers such as </think>. This hypothesis is supported by our empirical observations: even when the reasoning trace is entirely omitted, the model frequently produces correct answers, indicating that the cognitive process of reasoning may have been executed internally and silently. From this perspective, </think> does not denote the absence of reasoning, but rather the culmination of an internalized reasoning trajectory. It signals to the model that deliberation has concluded and that it should proceed to answer generation. This behavior can be interpreted as a form of reasoning distillation, in which the explicit explanatory steps are compressed into latent activations, allowing for both efficient inference and high-quality outputs without requiring full CoT generation.

4 Experiment

4.1 Datasets

To comprehensively evaluate our proposed method across diverse reasoning and knowledge-intensive scenarios, we conduct experiments on the following four benchmark datasets:

• GSM8K (Cobbe et al., 2021): A grade-school level math word problem dataset to assess arithmetic reasoning. Each question requires multi-step calculation and logical deduction.

• MMLU (Hendrycks et al., 2020): It covers 57 tasks across various domains including humanities, STEM, and social sciences, measuring general knowledge and reasoning ability.

• GPQA (Rein et al., 2024): A graduate-level physics question answering dataset targeting conceptual understanding. It tests model capability in high-level scientific reasoning.
• BBH (Suzgun et al., 2022): This subset focuses on difficult tasks that require multi-step, symbolic, or logical reasoning, offering a rigorous stress test for language models.

4.2 Metrics

We report three key evaluation metrics across all tasks to provide a comprehensive comparison of both performance and efficiency: Top-1 accuracy (Top@1 ↑), inference time (Time ↓), and token usage (Tokens ↓). Accuracy reflects the percentage of exact top-1 matches. All results are from a single run. Given that ThinkLess omits the explicit reasoning, we also report Top-k accuracy (Top@k, k ≥ 2) for ThinkLess variants. In this setup, the model is allowed to generate k candidate answers for each question, and the response is considered correct if any of them is accurate. This allows us to assess ThinkLess under a relaxed evaluation regime, which reflects its ability to retain answer quality even when reasoning tokens are suppressed.

To ensure fair comparison, we constrain the total number of generated tokens in the Top@k setting to remain comparable to the token budget used by standard CoT decoding (i.e., Top@1 with full reasoning). This enables an apples-to-apples evaluation of accuracy under equivalent token costs.

4.3 Backbones and Baselines

To ensure a comprehensive and fair evaluation, we conduct experiments on publicly available LLMs within the 7B to 14B parameter scale. This range reflects the practical constraints imposed by our available GPU resources, while still covering models with strong reasoning capabilities.

Backbone. Qwen2.5-7B/14B (Yang et al., 2024): A family
of powerful open-source instruction-tuned models known for their strong general reasoning abilities. LLaMA3.1-8B (Grattafiori et al., 2024): A well-balanced model from the LLaMA series that combines efficient inference with competitive instruction-following performance. All backbones are evaluated under identical decoding settings to ensure a consistent comparison.

Baselines. We compare ThinkLess against a single, strong baseline: the full CoT distilled variant. This model is obtained by distilling reasoning capabilities from a more powerful DeepSeek-R1, and it represents a high-performance upper bound. ThinkLess requires no fine-tuning, no auxiliary data, and no changes to the underlying model weights. To the best of our knowledge, we are the first to offer such efficient CoT reasoning compression in a fully training-free manner. Given this setting, the distilled full CoT model provides the most appropriate and meaningful baseline for comparison.

4.4 How Effective is ThinkLess?

Table 1 and Figure 5 present a detailed comparison between our proposed ThinkLess framework and the Distill baseline. The maximum token budget is set as 8k in Table 1. We detail accuracy, inference time, and token consumption below.

Table 1: Comparison of our ThinkLess and DeepSeek-R1 distilled models. Each cell reports Top@1 (%) / Time (s) / Tokens; columns are GSM8K | MMLU | GPQA | BBH | AVG.

Qwen2.5-7B
  Distill:                88.17/10.62/438.92 | 60.86/47.01/1817.84 | 30.81/148.82/5523.17 | 69.29/24.79/976.08  | 62.28/57.81/2189.00
  ThinkLess w/o Instruct: 87.79/6.57/274.20  | 54.04/6.77/279.50   | 31.31/15.39/631.87   | 62.02/8.57/341.91   | 58.79/9.33/381.87
  ThinkLess:              88.40/5.46/235.41  | 57.06/9.07/370.34   | 40.91/14.59/591.17   | 65.25/9.34/379.32   | 62.91/9.62/394.06

Qwen2.5-14B
  Distill:                92.12/20.37/508.40 | 81.40/62.20/1516.46 | 41.92/217.62/5205.02 | 83.84/55.48/1349.88 | 74.82/88.92/2144.94
  ThinkLess w/o Instruct: 92.42/9.94/252.49  | 75.95/12.06/300.33  | 39.39/24.69/612.79   | 76.36/11.04/275.33  | 71.03/14.43/360.24
  ThinkLess:              92.49/9.05/235.32  | 76.44/14.84/361.92  | 44.95/22.34/547.43   | 78.38/14.73/351.00  | 73.07/15.24/373.92

LLaMA3.1-8B
  Distill:                79.38/12.95/493.70 | 64.07/56.69/2119.48 | 25.76/162.79/6094.77 | 71.92/33.21/1252.02 | 60.28/66.41/2489.99
  ThinkLess w/o Instruct: 79.76/6.99/270.19  | 57.55/7.85/298.14   | 30.30/15.75/600.88   | 65.45/8.28/315.91   | 58.27/9.72/371.28
  ThinkLess:              78.92/6.73/260.74  | 60.27/10.23/384.55  | 31.31/48.81/1817.93  | 71.92/11.45/430.89  | 60.61/19.31/723.53

Comparable Accuracy Despite Omitting Reasoning. While ThinkLess entirely skips the visible CoT reasoning trace, its Top@1 accuracy remains consistently close to that of the full CoT baseline. For example, with Qwen2.5-7B, ThinkLess achieves an average accuracy of 62.91%, compared to 62.28% from Distill. With Qwen2.5-14B, ThinkLess reaches 73.07% vs. 74.82%. These small differences, within 1-2 points, demonstrate that ThinkLess retains most of the reasoning quality, validating our core hypothesis: reasoning can be effectively compressed into latent representations without explicit CoT generation.

Enhanced Accuracy under Comparable Token Budgets.
Figure 5 presents the Top@k accuracy of ThinkLess compared against the Top@1 accuracy of the full CoT Distill baseline, under an equal token budget. The results show that ThinkLess significantly outperforms the distilled counterpart across various datasets and model backbones. Notably, beyond accuracy improvements, ThinkLess also achieves lower inference latency. This is because the k candidate answers in ThinkLess can be generated
in parallel, whereas the distilled baseline must generate a long CoT sequence token by token in an inherently sequential manner.

Massive Reductions in Token Usage and Inference Time. ThinkLess achieves substantial efficiency gains across all settings. On average:

1. Token usage is reduced by 60-70%, dropping from 2189 tokens (Qwen2.5-7B, Distill) to just 904 with ThinkLess.

2. Inference time is reduced by 50% or more, e.g., from 21.89s to 9.64s with Qwen2.5-14B.

These savings stem from truncating long reasoning sequences early via </think>, which eliminates most of the token generation and KV cache accumulation that typically burdens autoregressive inference. Crucially, these gains come without any fine-tuning, distillation, or prompt engineering, making ThinkLess easy to deploy.

Robustness Across Models and Tasks. Although ThinkLess occasionally underperforms on specific datasets (e.g., slightly lower on BBH with Qwen2.5-14B), its average accuracy is remarkably stable across all backbones. This consistency indicates that our method generalizes well across diverse reasoning tasks and model families.

The Role of Output Regulation. Comparing ThinkLess to its ablated version ThinkLess w/o Instruct highlights the impact of our lightweight instruction-based output regulation. Across all settings, ThinkLess consistently outperforms the w/o Instruct variant in Top@1 accuracy, often by a significant margin. For instance: on MMLU with Qwen2.5-14B, ThinkLess achieves 76.44% vs. 75.22%; on BBH with LLaMA3.1-8B, 71.92% vs. 65.45%, a gap of over 6 points.

Figure 6 illustrates the answer agreement between Distill and ThinkLess w/o Instruct across datasets and backbones. Across most of the datasets, over 70% of predictions remain consistent (i.e., <True, True> or <False, False>), demonstrating that ThinkLess largely preserves the ability of the Distill model despite its early termination.
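The agreement breakdown behind Figure 6 amounts to a four-way tally of paired correctness outcomes. A minimal sketch, with toy prediction lists standing in for the actual per-question results:

```python
from collections import Counter

# Sketch of the Figure 6 agreement breakdown: for each question, pair the
# Distill outcome (True/False) with the ThinkLess w/o Instruct outcome and
# report the proportion of each of the four combinations. The two lists
# below are illustrative toy data, not the paper's results.

def agreement(distill_correct, thinkless_correct):
    counts = Counter(zip(distill_correct, thinkless_correct))
    total = len(distill_correct)
    return {pair: n / total for pair, n in counts.items()}

distill   = [True, True, False, True, False]
thinkless = [True, False, False, True, True]

print(agreement(distill, thinkless))
```

The <True, True> and <False, False> shares together give the consistency figure quoted above (over 70% on most datasets in the paper's runs).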
This confirms that without output regulation, the model, though internally sound, frequently fails to produce well-structured answers (e.g., a missing final choice or a wrong format). The addition of a short task-specific instruction guides the model to produce answers in a predictable and scorable format, which is critical for maintaining accuracy in the absence of full reasoning traces.

Summary. ThinkLess achieves comparable Top@1 accuracy to full CoT reasoning while halving inference time and reducing token usage by up to 70%, all in a training-free and model-agnostic manner. These results demonstrate that ThinkLess offers a highly practical trade-off between reasoning fidelity and computational efficiency.

Figure 5: Top@k accuracy of ThinkLess vs. Top@1 accuracy of DeepSeek-distilled models across datasets and models (panels cover BBH, GPQA, GSM8K, and MMLU for LLaMA, Qwen-7B, and Qwen-14B). We set k = Token Budget / 512 to keep token usage on par with the distilled models. Legends follow Figure 1.

Figure 6: Answer overlap between Distill and ThinkLess w/o Instruct. Each pie shows the proportion of "a, b" cases, where "a" is Distill's result (True or False) and "b" is ThinkLess w/o Instruct's (True or False).

5 Conclusion

This paper presents ThinkLess, an inference-efficient framework that reduces the overhead of CoT reasoning without any model modification or additional training. By analyzing attention patterns, we find that final answers rely little on early reasoning steps, enabling safe early termination via a reasoning terminator token. To preserve answer completeness and format, a lightweight output regulation step is introduced, leveraging the model's instruction-following ability. Experimental results show that ThinkLess achieves comparable accuracy to full CoT decoding while significantly lowering token usage and latency, making it a practical and generalizable solution for real-world deployment.

6 Limitations

While ThinkLess demonstrates strong efficiency and accuracy trade-offs, several limitations remain:

Reliance on Instruction Quality. The success of ThinkLess depends on the effectiveness of lightweight output regulation instructions. Poorly phrased or overly generic instructions may fail to guide the model toward well-structured outputs, especially for complex or ambiguous tasks. Designing effective instructions for new tasks may require manual tuning or domain-specific insights.

Lack of Dynamic Truncation Strategy.
ThinkLess currently inserts the </think> token at fixed positions, without dynamically adapting to the complexity of individual questions. For harder tasks requiring deeper reasoning, premature truncation may omit essential content. Developing an adaptive termination policy that tailors reasoning length to question difficulty remains
an open direction.

Assumption of Internal Reasoning Compression. ThinkLess assumes that LLMs internally compress reasoning into the </think> token, which may hold for certain instruction-tuned models but not all. Models without strong instruction-following capabilities, or those trained with different prompting formats, may not benefit from early termination in the same way, limiting the generalizability of our method.

Limited Scalability Validation. Due to computational resource constraints, we only evaluate ThinkLess on mid-sized models (7B-14B) and a limited set of reasoning benchmarks. Its performance on larger foundation models or broader tasks remains to be validated.

These limitations also highlight important directions for future work. In particular, extending ThinkLess to larger-scale models, more diverse task types, and dynamic truncation policies remains a key focus of our ongoing efforts.

7 Ethical Considerations

We use publicly available datasets and model checkpoints under licenses that permit research use. Details about the license terms and usage restrictions are provided in Section 4.1. We ensured that all artifacts were used in accordance with their intended purpose as stated by the original providers.
A Analysis of Output Formatting Issues from Early Termination

While ThinkLess is designed to terminate reasoning early and rely on internalized representations for answer generation, this can occasionally result in malformed outputs, particularly when the model is not explicitly instructed on how to format its final answer. Tables 2 and 3 illustrate common failure cases across different datasets, caused not by flawed reasoning, but by formatting deviations that hinder correct evaluation.

GPQA: Outputting the Answer Directly Instead of the Option. In multiple GPQA examples, the model correctly computes the numerical answer (e.g., "18", "(1+nv)/(n+v)", or "3536"), but fails to select the corresponding multiple-choice option letter (e.g., "A", "B", etc.). This is problematic because the task requires choosing from a list, and direct numeric answers, though logically correct, are treated as incorrect under automatic evaluation scripts. This issue is a direct consequence of skipping the reasoning trace, which would otherwise reinforce the expected answer format (e.g., "The answer is A").

BBH: Verbose or Misaligned Natural Language Outputs. In BBH, early termination sometimes causes the model to output full explanations (e.g., "No, Tamika does not tell the truth") instead of a concise boolean token like "False" or a lettered choice. In one example, the model responds with an overly verbose phrase, "The statement 'Return your final response within boxed {}' is True", a hallucination likely caused by partial instruction remnants seen during pretraining. These cases reveal how early truncation may weaken task compliance, especially for boolean or classification-based tasks that expect minimal final output.

MMLU: Misformatting Algebraic Expressions. For math-heavy tasks like MMLU, the model sometimes produces an exactly correct symbolic expression (e.g., "(x+1)(x−2)(x+4)") that does not match any of the provided answer options verbatim. Though mathematically equivalent to one of the choices, the mismatch in formatting or token order causes the model to be penalized. This highlights the fragility of matching-based evaluation when outputs are not explicitly aligned with options.

Key Insight: Output Regulation is Essential. These examples demonstrate that output formatting errors, not reasoning failures, are the dominant cause of the performance drop in ThinkLess without instruction-based regulation.
The missing or misaligned final answers occur because the model lacks an explicit signal about how to conclude the response after </think> is triggered. By contrast, ThinkLess with output regulation prepends a short, task-specific instruction (e.g., "Select the best option (A, B, C, D):") that helps the model map internal reasoning to a valid and scorable final output, without significantly increasing token length. This regulation mechanism is crucial for ensuring compatibility with automatic scorers and maintaining downstream performance.

B Instruction-based Output Regulation

Tables 4 and 5 provide the details of the instructions that regulate output formatting across different datasets and their subtasks.

Dataset: GPQA

Input: If an equimolar mixture X of two liquids, which decolorizes bromine water, is treated with platinum when heated, then an equimolar mixture Y of two other liquids is formed as a result of disproportionation, which does not decolorize bromine water. Hydrogenation of both mixture X and mixture Y in the presence of platinum under rather severe conditions gives only one substance, a certain hydrocarbon Z (mass fraction of hydrogen is 14.28%), which is a constituent of mixture Y and widely used as a solvent. Substance Z does not react further with hydrogen. There are no conjugated multiple bonds in the molecules of the compounds of mixture X. Indicate the total number of hydrogen atoms in two liquids of
mixture X. Options: A. 18 B. 22 C. 16 D. 12
Output: 18

Input: A light beam is propagating through a glass with index of refraction n. The glass is moving at constant velocity v in the same direction as the beam and toward the observer in the laboratory. What is the speed of light in glass relative to the observer in the laboratory? Take the speed of light in vacuum c = 1. Options: A. 1 B. (1−nv)/(n+v) C. (1+nv)/(n+v) D. (1+nv)/(n−v)
Output: (1+nv)/(n+v)

Input: A synchrocyclotron is a special type of cyclotron. A proton is injected at t = 0 with negligible kinetic energy, i.e., T ≈ 0, and undergoes two accelerations in the accelerator to reach the kinetic energy T1 = 950 MeV. The alternating voltage applied in the accelerator is given by U(t) = U0 cos[2πν(α)t + Φ0]. How many revolutions does the proton make in the synchrocyclotron? Options: A. 5300 B. 2500 C. 3536 D. 1864
Output: 3536

Table 2: Input and output cases of early termination from GPQA.

Dataset: BBH

Input: Fidel lies. Millicent says Fidel tells the truth. Crista says Millicent lies. Leda says Crista lies. Tamika says Leda tells the truth. Does Tamika tell the truth?
Output: No, Tamika does not tell the truth.

Input: False or not False or True or True is
Output: The statement "Return your final response within ||." is True.

Input: The following paragraph describes a set of seven objects arranged in a fixed order. The statements are logically consistent within the paragraph. On a branch, there are seven birds: a robin, a hummingbird, a raven, a blue jay, a crow, a cardinal, and a quail. The hummingbird is to the left of the crow. The cardinal is to the right of the quail. The blue jay is the fourth from the left. The robin is the second from the left. The quail is to the right of the crow. The raven is to the left of the robin. Which bird is the leftmost? Options: A. The crow is the leftmost B. The raven is the leftmost C. The blue jay is the leftmost D. The robin is the leftmost E. The hummingbird is the leftmost F. The cardinal is the leftmost G.
The quail is the leftmost
Output: The raven is the leftmost

Dataset: MMLU

Input: Find all zeros in the indicated finite field of the given polynomial with coefficients in that field: x^5 + 3x^3 + x^2 + 2x in Z_5. Options: A. 0,1 B. 0,4 C. 0 D. 1
Output: 0, 4

Input: The polynomial x^3 + 2x^2 + 2x + 1 can be factored into linear factors in Z_7[x]. Find this factorization. Options: A. (x−2)(x+2)(x−1) B. (x+1)(x+4)(x−2) C. (x+1)(x−4)(x−2) D. (x−1)(x−4)(x−2)
Output: (x+1)(x−2)(x+4)

Table 3: Input and output cases of early termination from BBH and MMLU.

Dataset: BBH (instruction regulations by sub-task)

boolean expression: Evaluate the given Boolean expression step by step, carefully analyzing each operation and verifying the logic at every stage. Ensure the reasoning process is accurate and consistent. Return the final result as either "True" or "False".

causal judgement: Assess whether the stated causal relationship between two events or phenomena is logically valid. Analyze the connection step by step, verify your reasoning at each stage, and base your judgment on evidence, logic, and plausibility. Conclude by providing
your final answer as "Yes" or "No".

formal fallacies: Analyze the given argument to determine whether it is deductively valid. Start by identifying and formalizing the premises and conclusion. Reflect on each step of your evaluation, ensuring the conclusion follows logically and necessarily from the premises without relying on external information or assumptions. Finally, respond with either "valid" or "invalid".

web of lies: Based on the statements made by the characters, determine whether the specified character is telling the truth. Analyze the relationships and consistency between the statements step by step, reflect on your reasoning at each stage, and ensure your judgment is logically sound. The final answer should be "Yes" or "No".

navigate: Given the navigation instructions, determine whether you can reach the destination. You can learn to analyze, but the final answer should be "Yes" or "No".

logical deduction seven objects: Solve the following logic puzzle to determine the correct order of seven objects based on the given clues. Analyze the clues step by step, reflect on your reasoning at each stage, and systematically eliminate incorrect possibilities. Finally, evaluate all the options (A-G) and select the one that represents the correct answer.

ruin names: Analyze each option for its humor, creativity, and resemblance to the original name step by step. Reflect on the reasoning process to determine the best choice for each question. Output your answers as a sequence of four letters (A-D), one for each question.

temporal sequences: Determine the correct order of events from the given choices. For each item, select the correct option (A-D) and output them in order.

Table 4: Instruction regulations on BBH sub-tasks.

GSM8K: Solve the math problem step by step. Give only the final numerical answer.
MMLU: Given the multiple-choice question above drawn from different academic disciplines, think step by step, self-check your reasoning, and output only the single final option (A, B, C, or D).

GPQA: You will be given a graduate-level multiple-choice science question. Think step-by-step (LaTeX allowed), self-check, then output one line with only the letter A, B, C, or D.

Table 5: Instruction regulations on GSM8K, MMLU and GPQA.
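The regulation instructions above can be attached to a prompt programmatically. Below is a minimal, hypothetical sketch of the two-step recipe: cut the reasoning trace at a fixed position with a terminator token, and prepend a short task-specific regulation instruction. The function names, token budget, and prompt layout are our assumptions, not the authors' code.

```python
# Illustrative sketch of ThinkLess-style early termination plus
# instruction-based output regulation. All names here are assumptions.

TERMINATOR = "</think>"

def truncate_reasoning(reasoning_steps, budget):
    """Cut the reasoning trace at a fixed position and force the
    terminator token, mimicking early termination at inference time."""
    return reasoning_steps[:budget] + [TERMINATOR]

def build_regulated_prompt(question, regulation_instruction):
    """Prepend a short task-specific regulation instruction (cf. Tables 4-5)
    so the model emits a scorable final answer after early termination."""
    return f"{regulation_instruction}\n{question}\n<think>"

prompt = build_regulated_prompt(
    "Find all zeros of x^5 + 3x^3 + x^2 + 2x in Z_5. A. 0,1 B. 0,4 C. 0 D. 1",
    "Output only the single final option (A, B, C, or D).",
)
truncated = truncate_reasoning(["step 1", "step 2", "step 3"], budget=2)
```

In a real deployment the truncated trace and regulated prompt would be fed to the model's decoder; here the strings merely illustrate where the terminator and instruction sit.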
Thought-Augmented Policy Optimization: Bridging External Guidance and Internal Capabilities

Jinyang Wu1* Chonghua Liao2* Mingkuan Feng1* Shuai Zhang1† Zhengqi Wen3 Pengpeng Shao3 Huazhe Xu2,4,5 Jianhua Tao1,3
1Department of Automation, Tsinghua University
2Institution for Interdisciplinary Information Sciences, Tsinghua University
3Beijing National Research Center for Information Science and Technology
4Shanghai Qi Zhi Institute 5Shanghai AI Lab
{wu-jy23,lch22}@mails.tsinghua.edu.cn, zhang_shuai@mail.tsinghua.edu.cn

Abstract

Reinforcement learning (RL) has emerged as an effective method for training reasoning models. However, existing RL approaches typically bias the model's output distribution toward reward-maximizing paths without introducing external knowledge. This limits their exploration capacity and results in a narrower reasoning capability boundary compared to base models. To address this limitation, we propose TAPO (Thought-Augmented Policy Optimization), a novel framework that augments RL by incorporating external high-level guidance ("thought patterns"). By adaptively integrating structured thoughts during training, TAPO effectively balances model-internal exploration and external guidance exploitation. Extensive experiments show that our approach significantly outperforms GRPO by 99% on AIME, 41% on AMC, and 17% on Minerva Math. Notably, these high-level thought patterns, abstracted from only 500 prior samples, generalize effectively across various tasks and models. This highlights TAPO's potential for broader applications across multiple tasks and domains. Our further analysis reveals that introducing external guidance produces powerful reasoning models with superior explainability of inference behavior and enhanced output readability.
1 Introduction

Reinforcement learning (RL) has demonstrated remarkable success in enhancing the reasoning capabilities of large language models (LLMs), as exemplified by OpenAI-o1 [1], DeepSeek-R1 [2], and Kimi-1.5 [3]. In contrast to traditional approaches that rely on human-curated annotations [4,5], contemporary RL training paradigms [2,6] directly optimize base language models using a simple, automatically computable reward function. This approach enables models to develop sophisticated Chain-of-Thought (CoT) [7] capabilities and autonomously incentivize advanced reasoning behaviors, including problem decomposition, self-reflection, and iterative refinement [8,9].

Recent RL research primarily focuses on enhancing training stability, efficiency, and performance through two key aspects: (1) addressing inherent limitations of the RL algorithm [10-12], such as length bias and KL divergence constraints; and (2) improving data organization and reducing data dependency [13-15], such as enabling no-supervision training. Despite these advancements, existing GRPO-based methods typically bias the model's self-generated output distribution toward reward-maximizing trajectories without incorporating external knowledge. This inherent limitation constrains exploration capacity and results in narrower reasoning capabilities compared to base models [9,8].

*Equal contributions.

arXiv:2505.15692v2 [cs.CL] 26 May 2025

Figure 1: Overall performance across five competition-level benchmarks (MATH-500, AIME 2024, AMC, OlympiadBench, and Minerva Math). TAPO significantly outperforms existing RL methods, especially on the challenging AIME and AMC benchmarks (↑99% and ↑41% over GRPO).

While a very recent and concurrent work, LUFFY [16], introduces off-policy guidance to enhance on-policy learning, it necessitates supervision signals from a computationally expensive strong policy model (DeepSeek-R1 [2] in the paper).
Moreover, the substantial capability gap between the external strong policy and the policy model being trained may lead to training instability issues.

To address these limitations, we propose TAPO, a Thought-Augmented Policy Optimization framework for LLM reasoning. Building upon conventional RL methods like GRPO [6],
TAPO introduces high-level thought patterns that effectively bridge external guidance and the model's internal reasoning capabilities during training. Specifically, we design a "thought library", a general repository storing high-level thought templates abstracted from 500 prior samples. Each template represents an abstract problem-solving strategy for a category of problems and serves as reasoning guidance. For each incoming question during GRPO sampling, we adaptively identify and apply relevant thought templates from this library to enhance the reasoning process. This dynamic integration of external guidance with internal model abilities enables the system to internalize more generalizable and explainable reasoning behaviors, stabilize model learning, and produce more powerful reasoning models.

Extensive experiments demonstrate that TAPO significantly outperforms GRPO across diverse datasets, achieving an average improvement of +12.0 points, including gains of 99% on AIME, 41% on AMC, and 17% on Minerva Math. As shown in Figure 1, our method also surpasses other powerful RL approaches. Moreover, TAPO proves effective across various model scales and architectures while exhibiting strong generalization to out-of-distribution reasoning tasks. Notably, our method achieves stable learning on Llama3.2-3B-Base, which has previously been documented to struggle with standard GRPO training [8,11]. Further analysis confirms that introducing external guidance enhances both model output explainability and readability. Our core contributions are:
•Superior Generalization and Enhanced Output Quality: TAPO extends effectively to out-of- distribution tasks, various model types while improving output explainability and readability. 2 Thought-Augmented Policy Optimization In this paper, we aim to investigate RL for LLMs with external guidance, as shown in Figure 2. We first introduce the popular RL method GRPO [ 6] (Section 2.1), then present our extended GRPO framework incorporating high-level thought guidance (Section 2.2), and finally describe how to construct a thought library that provides external guidance for RL training (Section 2.3). 2 Policy Model 𝑜𝑜3𝑜𝑜1 𝑜𝑜2 …𝑞𝑞 𝑟𝑟3𝑟𝑟1 𝑟𝑟2 …𝐴𝐴3𝐴𝐴1 𝐴𝐴2 … Thought -Augmented SolutionReward Calculation Reward Advantage External GuidanceAccuracy FunctionThought -Augmented Policy Optimization (TAPO) Policy OptimizationFigure 2: Flowchart of TAPO: Enhancing policy model capabilities through integration of external guidance (high-level thought patterns). This thought augmentation establishes an optimal balance between model internal exploration and external strategy exploitation. 2.1 Group Relative Policy Optimization (GRPO) LLMs as Markov Decision Processes The generation process of LLMs can be formulated as a token-level Markov Decision Process (MDP) M(S,A, r, pQ)[11,17], where Srepresents states (observation sequences) and Arepresents the action space (vocabulary). At each step t, the state st∈ S consists of the concatenation of the input question qand all tokens generated so far o<t. This state serves as input to the policy model πθ(·|st). Specifically, the policy processes st= (q,o<t) = (q1, q2, . . . , q l, o1, o2, . . . , o t−1), where qidenotes the i-th token of question qandoj,<trepresents the token generated by πθat step j. The policy then samples the next | https://arxiv.org/abs/2505.15692v2 |
token from the vocabulary $\mathcal{A}$.

In the RL framework, the entropy-regularized objective [18] to be optimized is:
$$\mathcal{J}(\pi_\theta) = \mathbb{E}_{q \sim p_Q}\left[\mathbb{E}_{o \sim \pi_\theta(\cdot \mid q)}[R(q, o)] - \beta \cdot D_{\mathrm{KL}}[\pi_\theta(\cdot \mid q) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid q)]\right], \quad (1)$$
where $R(q, o) = \sum_{t=1}^{|o|} r(s_t, o_t)$ denotes the return [17] of the trajectory $(q; o)$, $r(\cdot)$ represents the reward model, and $\pi_{\mathrm{ref}}$ denotes a reference policy. The KL regularization term is usually adopted to prevent the policy model $\pi_\theta$ from deviating too far from the reference model $\pi_{\mathrm{ref}}$.

GRPO. Traditional RL approaches such as Proximal Policy Optimization (PPO) [19] employ policy gradient methods to optimize the objective in Equation 1. GRPO [2] offers an elegant simplification of PPO by eliminating the need for an additional reward model. It assigns a scalar reward to each trajectory and normalizes these rewards across the group. Specifically, let $\pi_{\theta_{\mathrm{old}}}$ denote the policy model before updating. Given an input question $q$ and $G$ outputs $\{o_1, \ldots, o_G\}$ generated by $\pi_{\theta_{\mathrm{old}}}$, the normalized reward $A_{i,t}$ is shared across all tokens in $o_i$:
$$A_{i,t} = \frac{r(o_i) - \mathrm{mean}(\{r(o_i) \mid o_i \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid s_t),\ i = 1, 2, \ldots, G\})}{\mathrm{std}(\{r(o_i) \mid o_i \sim \pi_{\theta_{\mathrm{old}}}(\cdot \mid s_t),\ i = 1, 2, \ldots, G\})}. \quad (2)$$
Then, the GRPO objective function is:
$$\mathcal{J}_{\mathrm{GRPO}}(\pi_\theta) = \frac{1}{G}\sum_{i=1}^{G}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\left\{\min\left[\rho_{i,t}A_{i,t},\ \hat{\rho}_{i,t}A_{i,t}\right] - \beta \cdot D_{\mathrm{KL}}[\pi_\theta(\cdot \mid q) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid q)]\right\} \quad (3)$$
with probability ratio $\rho_{i,t} = \frac{\pi_\theta(o_{i,t} \mid q,\, o_{i,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,t} \mid q,\, o_{i,<t})}$ and clipped ratio $\hat{\rho}_{i,t} = \mathrm{clip}(\rho_{i,t};\ 1-\epsilon,\ 1+\epsilon)$.

In practical implementations of GRPO, $r(\cdot)$ typically represents a rule-based verifier.

2.2 Extending GRPO with High-Level External Guidance

In this subsection, we formally extend GRPO by incorporating high-level external guidance to enhance the model's internal reasoning capabilities. For multi-step reasoning tasks, it is typically easier for weak models to generate one correct step than to complete the entire reasoning in a single inference. We leverage this characteristic through our guidance mechanism.
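As a concrete illustration of the group normalization in Eq. (2), the advantage of each sampled output can be computed from the group's scalar rewards. This is a minimal sketch, not the authors' implementation; the small epsilon guard for zero-variance groups is our own assumption.

```python
import statistics

def group_normalized_advantages(rewards, eps=1e-8):
    """GRPO-style advantage (Eq. 2): normalize each trajectory's scalar
    reward by the mean and standard deviation of its sampling group.
    The eps guard against zero-variance groups is an assumption."""
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# A group of G = 4 sampled outputs scored by a rule-based verifier (0/1):
# mean = 0.5, std = 0.5, so correct outputs get ~+1 and incorrect ~-1.
advs = group_normalized_advantages([1.0, 0.0, 1.0, 0.0])
```

Note that when all rewards in a group are identical (all 0 or all 1), every advantage is zero, which is exactly the no-gradient situation DAPO [10] highlights and that the multi-guidance sampling below is designed to mitigate.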
We define guidance to be a function $g$ that transforms an input question $q$ into a thought-augmented form $q^{\mathrm{aug}}$. When prompted with guidance, the model generates a partial solution, which is then combined with the original question $q$ to create a question with hints, $q^{\mathrm{aug}}$. Specifically, for a specific guidance $g_j$, we have $q^{\mathrm{aug}}_j = g_j(q)$. Then, for each augmented question $q^{\mathrm{aug}}_j$, we sample a micro group of $G_j$ outputs $o_{j,1}, o_{j,2}, \ldots, o_{j,G_j}$ from the old policy model, representing diverse reasoning paths facilitated by the guidance. Section 2.3 provides details of this guidance mechanism.

Given multiple diverse guidances $g_1, \ldots, g_{|g|}$, the GRPO objective can be reformulated as:
$$\tilde{\mathcal{J}}_{\mathrm{GRPO}}(\pi_\theta) = \frac{1}{\sum_{i=1}^{|g|} G_i}\sum_{i=1}^{|g|}\sum_{j=1}^{G_i}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\left\{\min\left[\rho_{i,j,t}A_{i,t},\ \hat{\rho}_{i,j,t}A_{i,t}\right] - \beta \cdot D_{\mathrm{KL}}[\pi_\theta \,\|\, \pi_{\mathrm{ref}}]\right\}, \quad (4)$$
with probability ratio $\rho_{i,j,t} = \frac{\pi_\theta(o_{i,j,t} \mid q^{\mathrm{aug}}_j,\, o_{i,j,<t})}{\pi_{\theta_{\mathrm{old}}}(o_{i,j,t} \mid q^{\mathrm{aug}}_j,\, o_{i,j,<t})}$ and clipped ratio $\hat{\rho}_{i,j,t} = \mathrm{clip}(\rho_{i,j,t};\ 1-\epsilon,\ 1+\epsilon)$.

For simplicity, we denote the objective for each thought guidance as:
$$\mathcal{J}_i(\pi_\theta) = \frac{1}{G_i}\sum_{j=1}^{G_i}\frac{1}{|o_i|}\sum_{t=1}^{|o_i|}\left\{\min\left[\rho_{i,j,t}A_{i,t},\ \hat{\rho}_{i,j,t}A_{i,t}\right] - \beta \cdot D_{\mathrm{KL}}[\pi_\theta(\cdot \mid q) \,\|\, \pi_{\mathrm{ref}}(\cdot \mid q)]\right\}; \quad (5)$$
then, the extended GRPO objective (Equation 4) can be written as:
$$\tilde{\mathcal{J}}_{\mathrm{GRPO}}(\pi_\theta) = \frac{1}{\sum_{i=1}^{|g|} G_i}\sum_{i=1}^{|g|} G_i\,\mathcal{J}_i(\pi_\theta). \quad (6)$$
This formulation can be viewed as a weighted sum of guidance-specific objectives, with $G_i$ serving as the weight for each term. Consequently, we are optimizing the model under multiple diverse high-level thought guidances simultaneously.

As demonstrated in DAPO [10], prompts yielding binary accuracy values (0 or 1) produce no gradients, impeding learning and reducing sample efficiency. Let $p_j$ ($\leq 1$) denote the probability of obtaining zero accuracy when sampling from $\pi_{\theta_{\mathrm{old}}}(q^{\mathrm{aug}}_j)$. The probability of obtaining
at least one positive sample in the training group becomes $1 - \prod_{i=1}^{|g|} p_i$ ($\geq 1 - p_j,\ \forall j$). Thus, we can infer that training with such grouped samples, guided by more diverse instructions, will lead to more stable model learning. In our implementation, we generate an equal number of outputs for each guidance ($G_1 = G_2 = \cdots = G_{|g|}$), assigning uniform learning weights. Future work could explore different or dynamic weight allocation strategies based on the learning process. Notably, when $|g| = 1$ and questions remain unaugmented ($q = q^{\mathrm{aug}}$), Equation 4 degenerates to the vanilla GRPO objective (Equation 3). Therefore, GRPO can be viewed as a special case of our more general framework.

2.3 Thought Library and Augmented Reasoning

In this subsection, we describe how to construct our thought library, which provides the external guidance for RL training in Section 2.2. As illustrated in previous work [20,21], humans typically solve complex reasoning tasks by applying universal guidelines ("thought patterns") induced from similar problems rather than starting from scratch. These high-level thought patterns help address unfamiliar tasks by leveraging previously successful reasoning strategies. Inspired by prior work [22,23], we introduce a "thought library", a lightweight hub of high-level thought templates abstracted from just 500 seed samples, which adaptively provides relevant thought patterns during GRPO sampling.

Thought Library. Starting with a small set of seed samples $\mathcal{S} = \{s_1, \ldots, s_s\}$, we employ Monte Carlo Tree Search (MCTS) [24-26] to generate solution trees. For each question $s_i \in \mathcal{S}$, a predefined action set $\mathcal{A} = \{a_1, \ldots, a_A\}$, and model $\pi$, MCTS builds a search tree $\mathcal{T}_i$ where the root node represents question $s_i$, each edge denotes an action $a \in \mathcal{A}$, and each child node $n$ contains partial solutions generated by $\pi$ under the corresponding action. A path from the root to a leaf node $n_{j,d}$ forms a solution trajectory $t_j = (s_i, a_{j,1}, n_{j,1}, \ldots, a_{j,d}, n_{j,d})$. Each intermediate node $n_{j,l}$ is generated based on the cumulative context of its parent nodes and the current action, i.e., $n_{j,l} =$
Each intermediate node $n_{j,l}$ is generated from the cumulative context of its parent nodes and the current action, i.e., $n_{j,l} = \pi([s_i, a_{j,1}, n_{j,1}, \ldots, a_{j,l}])$. Through this process, we obtain a diverse set of solution traces $\mathcal{T} = \{t_1, t_2, \ldots, t_{|\mathcal{T}|}\}$. The MCTS algorithm assigns a final reward $R(t_j \mid s_i)$ to each trace $t_j \in \mathcal{T}$. Further MCTS details are provided in Appendix A.1.

Figure 3: Schematic diagram of an action-chain-structured solution trajectory.

Through the above process, for each seed question $s_i$ we obtain multiple solution traces. To identify the optimal trajectory for $s_i$, we utilize a simple path selection metric proposed in HiAR-ICL [23]:

$$\mathrm{Score}(s_i, t_j) = b \cdot R(t_j \mid s_i) - (1-b) \cdot C(t_j), \quad (7)$$

where $C(t_j)$ represents trajectory complexity (the action count), and $b$ (set to 0.95) balances solution quality against complexity. This scoring function selects trajectories that maximize accuracy while maintaining procedural conciseness among multiple potential solutions. For each question $s_i \in \mathcal{S}$, we select the optimal solution trace $t_{i,\mathrm{best}}$ that maximizes this score. Since each node in $t_{i,\mathrm{best}}$ corresponds to an instantiated action $a_{i,l} \in \mathcal{A}$, we retain the more general action-trace as a
high-level thought pattern $T_j = (a_1, \ldots, a_d)$, and aggregate these patterns to construct our thought library $\mathcal{L} = \{\hat{T}_1, \ldots, \hat{T}_{|\mathcal{S}|}\}$. This aggregation is guided by the Problem Condition Complexity (PCC) [27,28], which represents the number of known prior conditions in $q$ and can be calculated by the model $\pi$. Ultimately, each thought template in our library contains both a high-level thought pattern (e.g., $a_1 \to a_2 \to a_4$) and the average PCC of the questions sharing this pattern: $\hat{T}_j = (\mathrm{PCC}_{T_j}, T_j)$. These templates represent generalized problem-solving strategies for similar problems and serve as the external guidance described in Section 2.2. Detailed implementations are provided in Appendix A.2.

Reasoning with Guidance Drawing on meta-reasoning principles [29,30], we adaptively identify the most relevant high-level thought patterns for each new problem. During GRPO sampling for a new incoming question $q_t$, we compute its PCC metric and calculate the absolute distance $d_j = |\mathrm{PCC}_{q_t} - \mathrm{PCC}_{T_j}|$ for each $\hat{T}_j \in \mathcal{L}$. We then sort these distances to obtain the $k$ most similar templates $\{\hat{T}_{i_1}, \ldots, \hat{T}_{i_k}\}$ that best align with the question's complexity. The thought patterns in these templates, which are sequences of actions, guide the step-by-step reasoning process for question $q_t$. Notably, we use MCTS to build the reasoning tree, abstract it into a high-level thought library, and then match new questions against it. Such a thought library could also be obtained in other ways; for example, human experts could write a general solution approach for each category of problems and then match new questions to existing problem types. We leave this for future work.

3 Experimental Setup

Training Datasets To keep the training recipe simple, we select training data exclusively from the training set of the MATH [31] dataset. Following prior studies [32,11], we only use MATH level 3-5 problems for training, yielding 5.5K examples.
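The two thought-library operations described in Section 2.3, selecting the best trace per seed question (Equation 7) and retrieving the $k$ templates whose average PCC is closest to a new question's, can be sketched as follows. This is an illustrative sketch rather than our released implementation; the rewards, action counts, and PCC values are toy numbers.

```python
def path_score(reward, num_actions, b=0.95):
    """Eq. 7: Score(s_i, t_j) = b * R(t_j | s_i) - (1 - b) * C(t_j)."""
    return b * reward - (1.0 - b) * num_actions

def select_best_trace(traces, b=0.95):
    """Pick the (reward, num_actions, actions) trace maximizing Eq. 7."""
    return max(traces, key=lambda t: path_score(t[0], t[1], b))

def retrieve_templates(library, pcc_query, k=2):
    """Return the k templates whose average PCC is closest to the
    query question's PCC (distance d_j = |PCC_q - PCC_T_j|)."""
    return sorted(library, key=lambda t: abs(t[0] - pcc_query))[:k]

# Toy thought library: (average PCC, action chain) pairs.
library = [
    (2.0, ("divide_and_conquer", "reflection")),
    (5.0, ("decompose", "verify", "summarize")),
    (3.5, ("one_step_thought",)),
]
best = select_best_trace([
    (1.0, 5, ("a1", "a2", "a3", "a4", "a5")),  # correct but long
    (1.0, 2, ("a1", "a4")),                    # correct and concise
    (0.2, 1, ("a3",)),                         # short but wrong
])
guidance = retrieve_templates(library, pcc_query=3.0, k=2)
```

With $b = 0.95$, the concise correct trace wins the selection, and the two templates with PCC 3.5 and 2.0 are retrieved for a question of PCC 3.0; the retrieved action chains then serve as guidance during GRPO sampling.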
We randomly sample 500 instances to generate high-level thought patterns and use the remaining 5K examples for training in this work.

Evaluation We mainly focus on several widely used math reasoning benchmarks, including MATH500 [31], AIME 2024 [33], AMC [33], Minerva [34], OlympiadBench [35], GSM8K [36], College Math [37], and Gaokao23 [38]. Since our RL training focuses on math reasoning, we further assess generalization capability on three out-of-distribution benchmarks: GPQA-Diamond [39] (graduate-level science knowledge), ARC-C [40] (open-domain reasoning), and MMLU-Pro [41] (questions from academic exams and textbooks). Following common practice and previous work [11,13], we use greedy decoding during evaluation and limit the sampling budget to 3000 tokens.

Baseline Methods We benchmark TAPO against the following baselines on Qwen2.5-Math-7B: (1) GRPO [6], a simplified PPO variant using the same 5k training samples as TAPO; (2) SimpleRL-Zero [32], which applies GRPO to approximately 24k math samples from GSM8K [36] and MATH [31]; (3) OpenReasoner-Zero [42], employing PPO with 129k samples from diverse sources

Table 1: Main results (%) on five competition-level reasoning benchmarks based on Qwen2.5-Math-7B-Base. The best results on each benchmark are highlighted in bold. TAPO outperforms
RL baselines and shows significant gains over GRPO, with relative improvements provided.

| Model | MATH500 ↑ | AIME24 ↑ | AMC ↑ | Minerva ↑ | Olympiad ↑ | Avg. ↑ |
| Qwen2.5-Math [45] | 50.8 | 13.3 | 42.5 | 12.1 | 17.2 | 27.2 |
| Qwen2.5-Math-Instruct [45] | 81.0 | 13.3 | 55.0 | 32.7 | 38.8 | 44.1 |
| SimpleRL-Zero [32] | 74.6 | 26.7 | 60.0 | 27.6 | 35.8 | 44.9 |
| OpenReasoner-Zero [47] | 81.0 | 16.7 | 57.5 | 32.7 | 43.2 | 46.2 |
| PRIME-Zero [43] | 79.0 | 20.0 | 60.0 | 36.4 | 40.6 | 47.2 |
| Oat-Zero [11] | 79.6 | 30.0 | 60.0 | 34.2 | 39.9 | 48.7 |
| LUFFY [16] | 83.0 | 26.7 | 70.0 | 32.7 | 46.2 | 51.7 |
| GRPO [6] | 76.2 | 16.7 | 55.0 | 32.7 | 38.1 | 43.8 |
| TAPO (Ours) | 83.4 | 33.3 | 77.5 | 38.2 | 46.2 | 55.8 |
| △(↑) | +9.4% | +99.4% | +40.9% | +16.8% | +21.2% | +27.4% |

including AIME; (4) PRIME-Zero [43], utilizing implicit process rewards with policy rollouts and outcome labels on 150k NuminaMath [33] queries; (5) Oat-Zero [11], which introduces Dr.GRPO to mitigate length bias, trained on 8k MATH questions; and (6) LUFFY [16], featuring mixed-policy GRPO that incorporates DeepSeek-R1's outputs, trained on 45k samples from OpenR1-Math-220k [44].

Implementation Details Following previous work [43,32,11], we primarily use Qwen2.5-Math-7B [45] as the default model. Additionally, we apply TAPO to Qwen2.5-Math-1.5B [45], Qwen2.5-Math-1.5B-Instruct [45], Qwen2.5-7B-Instruct [46], Llama-3.2-3B-Instruct [5], and Llama-3.1-8B-Instruct [5] to showcase its adaptability across different model scales and types. For RL training, we follow the Open-R1 [44] pipeline. Following previous work [11,16], we remove the KL loss term by setting $\beta = 0$ and employ the Dr.GRPO loss. Our training configuration uses a batch size of 128, generating 16 samples per prompt. In our implementation, we set the number of guidances to 2 (i.e., $|g| = 2$) by default and generate an equal number of rollouts for each guidance, i.e., $G_1 = G_2 = 16/2 = 8$. The reward function is a binary accuracy metric verified by Math-Verify. We train for 500 steps in all experiments. All training experiments are conducted on 8 A100 GPUs.
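Under a configuration like ours ($|g| = 2$ guidances with $G_1 = G_2$ rollouts each, $\beta = 0$), the aggregation in Equation 4 reduces to a per-token clipped surrogate, averaged per output and normalized by the total rollout count. The sketch below is a minimal illustration with toy (ratio, advantage) pairs, not our training code; it implements Equation 4 as written and omits the Dr.GRPO length-bias correction for brevity.

```python
def clipped_term(ratio, adv, eps=0.2):
    """min(rho * A, clip(rho; 1 - eps, 1 + eps) * A) for one token."""
    clipped = min(max(ratio, 1.0 - eps), 1.0 + eps)
    return min(ratio * adv, clipped * adv)

def tapo_objective(micro_groups, eps=0.2):
    """Equation 4 with beta = 0.  `micro_groups` is a list over the
    |g| guidances; each entry holds that guidance's outputs, and each
    output is a list of per-token (importance ratio, advantage) pairs."""
    total_outputs = sum(len(g) for g in micro_groups)   # sum_i G_i
    acc = 0.0
    for group in micro_groups:
        for output in group:
            per_token = [clipped_term(r, a, eps) for r, a in output]
            acc += sum(per_token) / len(per_token)      # (1/|o|) sum_t
    return acc / total_outputs                          # 1/(sum_i G_i)

# |g| = 2 guidances with two toy outputs each (G_i = 8 in our runs):
groups = [
    [[(1.0, 1.0), (1.1, 1.0)], [(0.9, -0.5)]],   # guidance 1
    [[(2.0, 1.0)], [(1.0, 0.0), (1.0, 0.0)]],    # guidance 2; 2.0 clips to 1.2
]
```

Here `tapo_objective(groups)` evaluates to 0.45: the out-of-range ratio 2.0 is clipped to 1.2 before multiplying its advantage, exactly as in the $\hat{\rho}$ term of Equation 4.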
More implementation details are provided in Appendix B.

4 Results and Discussion

This section presents the results of TAPO from four aspects: §4.1 Main Results, §4.2 Training Dynamics, §4.3 Ablation Study and Discussion, and §4.4 Case Study.

4.1 Main Results

Reasoning Benchmark Performance Following previous work [11,16], Table 1 presents the main results across five competition-level reasoning benchmarks. We compare TAPO with multiple representative RL methods, as described in Baseline Methods. To ensure a fair comparison, all baselines are based on Qwen2.5-Math-7B. Our evaluation results reveal three key insights:

• TAPO achieves an average score of 55.8, significantly outperforming existing powerful RL methods by a margin of +4.1 points over the best baseline, clearly demonstrating the benefit of integrating high-level external guidance with the model's internal reasoning capabilities.

• On challenging datasets like AMC (+7.5 points over the best baseline), TAPO significantly outperforms other methods. By adaptively integrating action-chain-structured thought patterns during GRPO training, our method enables more nuanced problem decomposition, generates higher-quality training samples, and thus facilitates more effective model learning.

• Compared to its GRPO counterpart, TAPO consistently outperforms across all benchmarks, achieving a notable improvement of +12.0 points on average. This consistent performance gain provides a more robust and effective
alternative for RL training.

Table 2: Accuracy (%) results of different LLMs across eight benchmarks. The best results in each box are highlighted in bold. We provide the relative improvement of our method compared to GRPO.

| Method | AIME24 ↑ | AMC ↑ | MATH500 ↑ | GSM8K ↑ | Minerva ↑ | Olympiad ↑ | CollegeMath ↑ | Gaokao23 ↑ | Avg. ↑ |

Qwen2.5-1.5B-Math [45]
| CoT | 10.0 | 42.5 | 59.0 | 74.6 | 24.3 | 27.6 | 39.5 | 49.6 | 40.9 |
| GRPO | 13.3 | 40.0 | 66.4 | 74.7 | 25.0 | 30.1 | 40.5 | 52.7 | 42.8 |
| Ours | 16.7 | 55.0 | 69.0 | 84.2 | 31.6 | 33.6 | 47.3 | 54.8 | 49.0 |
| △(↑) | +25.6% | +37.5% | +3.9% | +12.7% | +26.4% | +11.7% | +16.8% | +4.0% | +14.5% |

Qwen2.5-1.5B-Math-Instruct [45]
| CoT | 6.7 | 47.5 | 68.2 | 76.8 | 28.3 | 36.9 | 47.1 | 63.1 | 46.8 |
| GRPO | 13.3 | 52.5 | 76.8 | 85.9 | 28.3 | 36.7 | 45.9 | 65.2 | 50.6 |
| Ours | 16.7 | 55.0 | 76.0 | 86.5 | 29.4 | 39.7 | 48.3 | 65.7 | 52.2 |
| △(↑) | +25.6% | +4.8% | -1.0% | +1.0% | +3.9% | +8.2% | +5.3% | +1.0% | +3.2% |

Qwen2.5-7B-Instruct [46]
| CoT | 13.3 | 47.5 | 73.2 | 90.0 | 30.5 | 38.8 | 46.9 | 64.2 | 50.5 |
| GRPO | 13.3 | 57.5 | 76.6 | 90.1 | 32.4 | 36.1 | 44.5 | 62.9 | 51.6 |
| Ours | 16.7 | 67.5 | 78.0 | 91.5 | 36.8 | 40.6 | 50.6 | 65.2 | 55.8 |
| △(↑) | +25.6% | +17.4% | +1.9% | +1.6% | +13.6% | +12.5% | +13.7% | +3.7% | +8.1% |

Llama-3.2-3B-Instruct [5]
| CoT | 6.7 | 20.0 | 38.3 | 69.3 | 11.8 | 12.6 | 23.8 | 33.5 | 27.6 |
| GRPO | 3.3 | 25.0 | 47.8 | 75.2 | 17.6 | 14.5 | 34.1 | 40.8 | 32.2 |
| Ours | 6.7 | 27.5 | 48.8 | 78.8 | 18.4 | 16.0 | 32.5 | 43.1 | 34.0 |
| △(↑) | +103.0% | +10.0% | +2.1% | +4.8% | +4.6% | +10.4% | -4.7% | +5.7% | +5.6% |

Llama-3.1-8B-Instruct [5]
| CoT | 3.3 | 20.0 | 36.6 | 77.2 | 16.2 | 15.9 | 13.3 | 29.9 | 26.5 |
| GRPO | 3.3 | 22.5 | 45.0 | 82.9 | 21.0 | 16.1 | 31.7 | 40.8 | 32.9 |
| Ours | 6.7 | 30.0 | 52.2 | 85.2 | 26.8 | 17.3 | 34.1 | 42.6 | 36.7 |
| △(↑) | +103.0% | +33.3% | +16.0% | +2.8% | +27.7% | +7.5% | +7.6% | +4.5% | +11.6% |

Out-of-Distribution Generalization Recent studies have highlighted the critical impact of distributional bias on LLMs' reliability [48,49]. Despite impressive in-distribution (ID) performance, these models substantially underperform when confronted with out-of-distribution (OOD) data [50,51]. To assess TAPO's OOD generalization capabilities, we evaluate on three challenging benchmarks: ARC-C, GPQA-Diamond, and MMLU-Pro.
Given that all compared methods were trained on mathematical data, this setup provides a robust OOD evaluation. As shown in Figure 4, TAPO outperforms GRPO by 13.7% on average across OOD tasks. These results highlight TAPO's effectiveness in using high-level external guidance to enhance OOD generalization.

Figure 4: Results on three out-of-distribution benchmark datasets (Qwen2.5-Math-7B-Base). TAPO improves over GRPO by +8.0%, +21.1%, and +17.6% on the three benchmarks, for an average gain of +13.7%.

Extension to More Models To demonstrate TAPO's effectiveness across different scales and model types, we extend TAPO to several weaker models: Qwen2.5-1.5B-Math, Qwen2.5-1.5B-Math-Instruct, Qwen2.5-7B-Instruct, Llama-3.2-3B-Instruct, and Llama-3.1-8B-Instruct. As shown in Table 2, TAPO achieves significant improvements across all models. Taking Qwen2.5-1.5B-Math as an example, TAPO achieves an average improvement of 14.5%. A similar trend is observed on Qwen2.5-1.5B-Math-Instruct, Qwen2.5-7B-Instruct, Llama-3.2-3B-Instruct, and Llama-3.1-8B-Instruct, where TAPO achieves improvements of 3.2%, 8.1%, 5.6%, and 11.6%, respectively.

4.2 Training Dynamics

In this section, we explore the behavioral differences between TAPO and GRPO through their training reward curves. We conduct experiments on Qwen2.5-Math-7B-Base and Llama-3.2-3B-Base.

More Stable Model Learning As illustrated in Figure 5, TAPO consistently achieves
higher overall training rewards than GRPO for both models. While this advantage appears modest for Qwen2.5-Math-7B-Base, it becomes substantially more pronounced when using the weaker Llama-3.2-3B-Base. Notably, GRPO tends to collapse during later training stages. This observation aligns with previous findings [8,11], which demonstrate that directly applying GRPO to weak models like Llama-3.2-3B-Base frequently leads to training collapse across both mathematical tasks and reasoning games. In contrast, TAPO maintains stable training dynamics even with such weak models.

Figure 5: Training reward curves on (a) Qwen2.5-Math-7B-Base and (b) Llama-3.2-3B-Base.

The enhanced stability of TAPO stems from its strategic integration of thought patterns into the sampling process. By breaking down complex problems into manageable subtasks, TAPO provides explicit external guidance. This equips weaker models with advanced task decomposition capabilities usually exclusive to stronger models. As a result, weaker models can learn from difficult examples that typically fail to produce positive outcomes under standard GRPO training. As shown in Section 2, this mechanism directly contributes to enhanced training stability and robust model performance.

4.3 Ablation Study and Discussion

We investigate the effect of external guidance quantity ($|g| = 1, 2, 4, 8$) on training dynamics with fixed total rollouts. Figure 6 shows training rewards and evaluation results across varying thought pattern quantities.

Figure 6: Ablation results with different numbers of thought patterns (external guidance): (a) training reward curves for $|g| = 1, 2, 4, 8$; (b) evaluation performance on AIME24, MATH500, Olympiad, Minerva, and AMC.
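The zero-gradient argument from Section 2.2 (a group is useless when every rollout scores 0, and diverse guidance makes that less likely) offers one lens on this ablation. Below is a toy calculation under a strong simplifying assumption of independent per-rollout failure probabilities, with invented numbers for illustration; it captures the benefit of adding an easier guidance under a fixed rollout budget, though not the capacity effect that makes very large $|g|$ counterproductive.

```python
def p_group_all_zero(per_rollout_fail, num_guidances, budget=16):
    """Toy model: probability that every rollout in the group scores 0
    when the rollout budget is split evenly over the first
    `num_guidances` guidances, each with its own per-rollout failure
    probability (independence assumed)."""
    G = budget // num_guidances
    p = 1.0
    for fail in per_rollout_fail[:num_guidances]:
        p *= fail ** G
    return p

# A hard question: unguided rollouts almost always fail (p = 0.99),
# while a decomposition hint makes success more likely (p = 0.6).
fails = [0.99, 0.6]
only_one = p_group_all_zero(fails, num_guidances=1)   # 0.99**16, ~0.85
with_two = p_group_all_zero(fails, num_guidances=2)   # 0.99**8 * 0.6**8
```

Under this toy model the second guidance sharply reduces the all-zero probability; the observed decline at $|g| = 8$ is attributed instead to model capacity, which this calculation does not represent.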
For clearer visualization, we uniformly adjust performance on AMC and MATH by subtracting a fixed value, without affecting the conclusions. We observe that more diverse guidance ($|g| = 1, 2, 4$) typically yields higher training rewards. However, when guidance becomes excessive ($|g| = 8$), performance declines slightly while still outperforming minimal guidance ($|g| = 1$). This may be because: (1) smaller $|g|$ makes it harder to sample positive examples; (2) larger $|g|$ increases the likelihood of sampling positive examples, but excessive quantities can overwhelm the model's limited learning capacity with too many patterns. In other words, when the pattern quantity becomes excessive, the model faces too many learning tasks simultaneously, exceeding its processing capabilities. Therefore, designing an effective thought-augmented framework requires balancing pattern quantity and model capacity to ensure stable learning while maintaining generalization ability. Evaluation performance in Figure 6(b) follows a similar trend, highlighting the importance of balancing guidance quantity with model capacity.

4.4 Case Study

Figure 7: Comparison of GRPO and TAPO for a simple geometry problem from the MATH dataset.

We compare the reasoning processes of TAPO and GRPO on a geometry problem in Figure 7. GRPO produces less readable outputs, with reasoning interspersed with code and inconsistent language. In contrast, TAPO first identifies its solution strategy (e.g., divide-and-conquer), then systematically addresses each subproblem with clear solutions. This demonstrates how TAPO training enhances both the readability and interpretability of the model's reasoning process.

5 Related Work

RL for LLMs Recent advances in LLM reasoning, such as OpenAI-o1 [1], DeepSeek-R1 [2], and Kimi-k1.5 [3], have shifted focus from Chain-of-Thought (CoT) [7] and supervised fine-tuning (SFT) [52,53] to reinforcement learning (RL). Contemporary research has primarily focused on: (1) addressing inherent limitations of GRPO [10-12], such as length bias and KL divergence constraints; and (2) improving data organization and reducing data dependency [13-15]. However, these methods typically bias the model's output distribution toward reward-maximizing paths without introducing external knowledge, narrowing reasoning capabilities compared to base models [9].
While the recent concurrent work LUFFY [16] introduces off-policy guidance to enhance on-policy learning, it still relies on supervision signals from a strong policy (DeepSeek-R1). Moreover, the substantial capability gap between the external strong policy and the policy model being trained potentially increases training instability. In contrast, TAPO incorporates external high-level thought guidance to augment the model's intrinsic capabilities without relying on a strong policy. By integrating diverse thought patterns, TAPO enables more stable learning and enhanced reasoning performance.

Reasoning with Guidance A common approach to enhancing model response quality involves augmenting input questions with external prompts [54-58]. This methodology has been widely applied to reasoning tasks with varying implementation strategies. Some research adaptively searches for suitable exemplars to perform few-shot CoT prompting [59], while other work focuses on decomposing complex reasoning tasks into simpler, sequential subtasks [60,61]. The partial solutions derived through few-shot prompting or task decomposition are subsequently concatenated with the original problem as guiding hints, thereby reducing problem complexity. However, these approaches typically necessitate meticulous prompt design and exhibit strong dependencies on example quality. Although recent works have advanced from specific examples toward more abstract high-level thought patterns [22,23,62], they primarily enhance the reasoning capabilities of fixed models through guidance. Moreover, few studies have investigated how to effectively integrate high-level guidance with RL training paradigms. Our work bridges this gap by introducing external abstract problem-solving guidance
into RL training, achieving superior performance while maintaining flexibility.

6 Conclusion

In this paper, we introduce TAPO (Thought-Augmented Policy Optimization), a novel RL framework that addresses fundamental limitations in current approaches for training reasoning models. By incorporating external high-level thought patterns into policy optimization, TAPO effectively bridges model-internal exploration with structured external guidance. Unlike conventional methods that merely bias toward reward-maximizing trajectories, our approach adaptively integrates abstract reasoning strategies during training, enhancing model capabilities across diverse problems. Extensive experiments demonstrate TAPO's significant improvements over GRPO, with relative gains of 99% on AIME, 40% on AMC, and 17% on Minerva Math. Our method maintains effectiveness across various model scales and architectures, including weak models that typically struggle with standard GRPO training. Moreover, TAPO produces models with enhanced output explainability and readability. These results establish TAPO as a promising direction for developing more powerful, generalizable, and interpretable reasoning systems, opening avenues for future research on integrating high-level thought patterns into model training across broader reasoning domains.

References

[1] A. Jaech, A. Kalai, A. Lerer, A. Richardson, A. El-Kishky, A. Low, A. Helyar, A. Madry, A. Beutel, A. Carney, et al., "Openai o1 system card," arXiv preprint arXiv:2412.16720, 2024.

[2] D. Guo, D. Yang, H. Zhang, J. Song, R. Zhang, R. Xu, Q. Zhu, S. Ma, P. Wang, X. Bi, et al., "Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning," arXiv preprint arXiv:2501.12948, 2025.

[3] K. Team, A. Du, B. Gao, B. Xing, C. Jiang, C. Chen, C. Li, C. Xiao, C. Du, C. Liao, et al., "Kimi k1.5: Scaling reinforcement learning with llms," arXiv preprint arXiv:2501.12599, 2025.

[4] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L.
Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat, et al., "Gpt-4 technical report," arXiv preprint arXiv:2303.08774, 2023.

[5] A. Grattafiori, A. Dubey, A. Jauhri, A. Pandey, A. Kadian, A. Al-Dahle, A. Letman, A. Mathur, A. Schelten, A. Vaughan, et al., "The llama 3 herd of models," arXiv preprint arXiv:2407.21783, 2024.

[6] Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, X. Bi, H. Zhang, M. Zhang, Y. Li, Y. Wu, et al., "Deepseekmath: Pushing the limits of mathematical reasoning in open language models," arXiv preprint arXiv:2402.03300, 2024.

[7] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al., "Chain-of-thought prompting elicits reasoning in large language models," Advances in Neural Information Processing Systems, vol. 35, pp. 24824-24837, 2022.

[8] K. Gandhi, A. Chakravarthy, A. Singh, N. Lile, and N. D. Goodman, "Cognitive behaviors that enable self-improving reasoners, or, four habits of highly effective stars," arXiv preprint arXiv:2503.01307, 2025.

[9] Y. Yue, Z. Chen, R. Lu, A. Zhao, Z. Wang, S. Song, and G. Huang, "Does reinforcement learning really incentivize reasoning capacity in llms beyond the base model?," arXiv preprint arXiv:2504.13837, 2025.

[10] Q. Yu, Z. Zhang, R. Zhu, Y. Yuan, X. Zuo, Y. Yue, T. Fan, G. Liu, L. Liu, X. Liu, et al., "Dapo: An open-source llm reinforcement learning system at scale," arXiv preprint arXiv:2503.14476, 2025.

[11] Z. Liu, C. Chen, W. Li, P. Qi, T. Pang, C. Du, W. S. Lee, and M. Lin, "Understanding r1-zero-like training: A critical perspective," arXiv preprint arXiv:2503.20783, 2025.

[12] Z. Lin, M. Lin, Y. Xie, and R. Ji, "Cppo: Accelerating the training of group relative policy optimization-based reasoning models," arXiv preprint arXiv:2503.22342, 2025.

[13] Y. Zuo, K. Zhang, S. Qu, L. Sheng, X. Zhu, B. Qi, Y. Sun, G. Cui, N. Ding, and B. Zhou, "Ttrl: Test-time reinforcement learning," arXiv preprint arXiv:2504.16084, 2025.

[14] A. Zhao, Y. Wu, Y. Yue, T. Wu, Q. Xu, Y. Yue, M. Lin, S. Wang, Q. Wu, Z. Zheng, and G. Huang, "Absolute zero: Reinforced self-play reasoning with zero data," 2025.

[15] Y. Wang, Q. Yang, Z. Zeng, L. Ren, L. Liu, B. Peng, H. Cheng, X. He, K. Wang, J. Gao, et al., "Reinforcement learning for reasoning in large language models with one training example," arXiv preprint arXiv:2504.20571, 2025.

[16] J. Yan, Y. Li, Z. Hu, Z. Wang, G. Cui, X. Qu, Y. Cheng, and Y. Zhang, "Learning to reason under off-policy guidance," arXiv preprint arXiv:2504.14945, 2025.

[17] R. S. Sutton, A. G. Barto, et al., Reinforcement Learning: An Introduction, vol. 1. MIT Press, Cambridge, 1998.

[18] J. Schulman, X. Chen, and P. Abbeel, "Equivalence between policy gradients and soft q-learning," arXiv preprint arXiv:1704.06440, 2017.

[19] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.

[20] D. Kahneman, Thinking, Fast and Slow. New York, NY: Farrar, Straus and Giroux, 2011.

[21] J. D. K. Ongchoco, J. Knobe, and J. Jara-Ettinger, "People's thinking plans adapt to the problem they're trying to solve," Cognition, vol. 243, p. 105669, 2024.

[22] L. Yang, Z. Yu, T. Zhang, S. Cao, M. Xu, W. Zhang, J. E. Gonzalez, and B. Cui, "Buffer of thoughts: Thought-augmented reasoning with large language models," Advances in Neural Information Processing Systems, vol. 37, pp. 113519-113544, 2024.

[23] J. Wu, M. Feng, S. Zhang, F. Che, Z. Wen, and J. Tao, "Beyond examples: High-level automated reasoning paradigm in in-context learning via mcts," arXiv preprint arXiv:2411.18478, 2024.

[24] L. Kocsis and C. Szepesvári, "Bandit based monte-carlo planning," in European Conference on Machine Learning, pp. 282-293, Springer, 2006.

[25] Z. Qi, M. Ma, J. Xu, L. L. Zhang, F. Yang, and M. Yang, "Mutual reasoning makes smaller llms stronger problem-solvers," arXiv preprint arXiv:2408.06195, 2024.

[26] X. Guan, L. L. Zhang, Y. Liu, N. Shang, Y. Sun, Y. Zhu, F. Yang, and M. Yang, "rstar-math: Small llms can master math reasoning with self-evolved deep thinking," arXiv preprint arXiv:2501.04519, 2025.

[27] F.-L. Lee and R. Heyworth, "Problem complexity: A measure of problem difficulty in algebra by using computer," Education Journal (The Chinese University of Hong Kong), vol. 28, no. 1, pp. 85-108, 2000.

[28] S. E. Embretson and R.
C. Daniel, "Understanding and quantifying cognitive complexity level in mathematical problem solving items," Psychology Science, vol. 50, no. 3, p. 328, 2008.

[29] S. Russell and E. Wefald, "Principles of metareasoning," Artificial Intelligence, vol. 49, no. 1-3, pp. 361-395, 1991.

[30] C. N. De Sabbata, T. R. Sumers, and T. L. Griffiths, "Rational metareasoning for large language models," arXiv preprint arXiv:2410.05563, 2024.

[31] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt, "Measuring mathematical problem solving with the MATH dataset," in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.

[32] W. Zeng, Y. Huang, Q. Liu, W. Liu, K. He, Z. Ma, and J. He, "Simplerl-zoo: Investigating and taming zero reinforcement learning for open base models in the wild," arXiv preprint arXiv:2503.18892, 2025.

[33] J. Li, E. Beeching, L. Tunstall, B. Lipkin, R. Soletskyi, S. Huang, K. Rasul, L. Yu, A. Q. Jiang, Z. Shen, et al., "Numinamath: The largest public dataset in ai4maths with 860k pairs of competition math problems and solutions." https://huggingface.co/datasets/Numinamath, 2024. Hugging Face repository, 13:9.

[34] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, et al., "Solving quantitative reasoning problems with language models," Advances in Neural Information Processing Systems, vol. 35, pp. 3843-3857, 2022.

[35] C. He, R. Luo, Y. Bai, S. Hu, Z. Thai, J. Shen, J. Hu, X. Han, Y. Huang, Y. Zhang, et al., "Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems," in Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3828-3850, 2024.

[36] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al., "Training verifiers to solve math word problems," arXiv preprint arXiv:2110.14168, 2021.

[37] Z. Tang, X. Zhang, B. Wang, and F. Wei, "Mathscale: Scaling instruction tuning for mathematical reasoning," arXiv preprint arXiv:2403.02884, 2024.

[38] X. Zhang, C. Li, Y. Zong, Z. Ying, L. He, and X. Qiu, "Evaluating the performance of large language models on gaokao benchmark," arXiv preprint arXiv:2305.12474, 2023.

[39] D. Rein, B. L. Hou, A. C. Stickland, J. Petty, R. Y. Pang, J. Dirani, J. Michael, and S. R. Bowman, "GPQA: A graduate-level google-proof q&a benchmark," in First Conference on Language Modeling, 2024.

[40] P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord, "Think you have solved question answering? try arc, the ai2 reasoning challenge," arXiv preprint arXiv:1803.05457, 2018.

[41] Y. Wang, X. Ma, G. Zhang, Y. Ni, A. Chandra, S. Guo, W. Ren, A. Arulraj, X. He, Z. Jiang, et al., "Mmlu-pro: A more robust and challenging multi-task language understanding benchmark," arXiv preprint arXiv:2406.01574, 2024.

[42] J. Hu, Y. Zhang, Q. Han, D. Jiang, X. Zhang, and H.-Y. Shum, "Open-reasoner-zero: An open source approach to scaling up reinforcement learning on the base model," arXiv preprint arXiv:2503.24290, 2025.

[43] G. Cui, L. Yuan, Z. Wang, H. Wang, W. Li, B. He, Y. Fan, T. Yu, Q. Xu, W. Chen, et al., "Process reinforcement through implicit rewards," arXiv preprint arXiv:2502.01456, 2025.

[44] H. Face, "Open r1: A fully open reproduction of deepseek-r1," January 2025.

[45] A. Yang, B. Zhang, B. Hui, B. Gao, B. Yu, C. Li, D. Liu, J. Tu, J. Zhou, J. Lin, K. Lu, M. Xue, R. Lin, T. Liu, X. Ren, and Z. Zhang, "Qwen2.5-math technical report: Toward mathematical expert model via self-improvement," 2024.

[46] Qwen Team, "Qwen2.5: A party of foundation models," September 2024.

[47] J. Hu, Y. Zhang, Q. Han, D. Jiang, X. Zhang, and H.-Y. Shum, "Open-reasoner-zero: An open source approach to scaling up reinforcement learning on the base model," 2025.

[48] L. Yuan, Y. Chen, G. Cui, H. Gao, F. Zou, X. Cheng, H. Ji, Z. Liu, and M. Sun, "Revisiting out-of-distribution robustness in NLP: Benchmarks, analysis, and LLMs evaluations," in Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2023.

[49] J. Wang, X. Hu, W. Hou, H. Chen, R. Zheng, Y. Wang, L. Yang, W. Ye, H. Huang, X. Geng, B. Jiao, Y. Zhang, and X. Xie, "On the robustness of chatgpt: An adversarial and out-of-distribution perspective," IEEE Data Eng. Bull., vol. 47, no. 1, pp. 48-62, 2024.

[50] L. Berglund, M. Tong, M. Kaufmann, M. Balesni, A. C. Stickland, T. Korbak, and O. Evans, "The reversal curse: LLMs trained on "a is b" fail to learn "b is a"," in The Twelfth International Conference on Learning Representations, 2024.

[51] H. Yang, Y. Zhang, J. Xu, H. Lu, P.-A. Heng, and W.
Lam, “Unveiling the generalization power of fine-tuned large language models,” in Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers) , (Mexico City, Mexico), pp. 884–899, Association for Computational Linguistics, June 2024. [52] C. Li, W. Wang, J. Hu, Y . Wei, N. Zheng, H. Hu, Z. Zhang, and H. Peng, “Common 7b language models already possess strong math capabilities,” arXiv preprint arXiv:2403.04706 , 2024. [53] E. Yeo, Y . Tong, X. Niu, G. Neubig, and X. Yue, “Demystifying long chain-of-thought reasoning in LLMs,” in ICLR 2025 Workshop on Navigating and Addressing Data Problems for Foundation Models , 2025. [54] A. Asai, S. Min, Z. Zhong, and D. Chen, “Retrieval-based language models and applications,” inProceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts) , pp. 41–46, 2023. 13 [55] G. Mialon, R. Dessì, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Rozière, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, et al. , “Augmented language models: a survey,” arXiv preprint arXiv:2302.07842 , 2023. [56] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W.-t. Yih, “Replug: Retrieval-augmented | https://arxiv.org/abs/2505.15692v2 |
black-box language models,” arXiv preprint arXiv:2301.12652 , 2023. [57] Y . Gao, Y . Xiong, X. Gao, K. Jia, J. Pan, Y . Bi, Y . Dai, J. Sun, H. Wang, and H. Wang, “Retrieval- augmented generation for large language models: A survey,” arXiv preprint arXiv:2312.10997 , vol. 2, p. 1, 2023. [58] P. Zhao, H. Zhang, Q. Yu, Z. Wang, Y . Geng, F. Fu, L. Yang, W. Zhang, J. Jiang, and B. Cui, “Retrieval-augmented generation for ai-generated content: A survey,” arXiv preprint arXiv:2402.19473 , 2024. [59] Z. Zhang, A. Zhang, M. Li, and A. Smola, “Automatic chain of thought prompting in large language models,” arXiv preprint arXiv:2210.03493 , 2022. [60] D. Zhou, N. Schärli, L. Hou, J. Wei, N. Scales, X. Wang, D. Schuurmans, C. Cui, O. Bousquet, Q. Le, et al. , “Least-to-most prompting enables complex reasoning in large language models,” arXiv preprint arXiv:2205.10625 , 2022. [61] T. Khot, H. Trivedi, M. Finlayson, Y . Fu, K. Richardson, P. Clark, and A. Sabharwal, “Decomposed prompting: A modular approach for solving complex tasks,” arXiv preprint arXiv:2210.02406 , 2022. [62] L. Yang, Z. Yu, B. Cui, and M. Wang, “Reasonflux: Hierarchical llm reasoning via scaling thought templates,” arXiv preprint arXiv:2502.06772 , 2025. [63] G. Chaslot, S. Bakkes, I. Szita, and P. Spronck, “Monte-carlo tree search: A new framework for game ai,” in Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment , vol. 4, pp. 216–217, 2008. [64] W. Ye, S. Liu, T. Kurutach, P. Abbeel, and Y . Gao, “Mastering atari games with limited data,” in Advances in Neural Information Processing Systems (M. Ranzato, A. Beygelzimer, Y . Dauphin, P. Liang, and J. W. Vaughan, eds.), vol. 34, pp. 25476–25488, Curran Associates, Inc., 2021. [65] A. Zhou, K. Yan, M. Shlapentokh-Rothman, H. Wang, and Y .-X. Wang, “Language agent tree search unifies reasoning, acting, and planning in language models,” in Forty-first International Conference on Machine Learning , 2024. 
[66] H. Dong, Z. Ding, and S. Zhang, Deep Reinforcement Learning: Fundamentals, Research and Applications, vol. 1 of eBook Packages: Mathematics and Statistics. Springer Singapore, 1st ed., 2020.
[67] P. I. Jaffe, R. A. Poldrack, R. J. Schafer, et al., “Modelling human behaviour in cognitive tasks with latent dynamical systems,” Nature Human Behaviour, vol. 7, pp. 986–1000, 2023.
[68] S. Da Silva, “System 1 vs. System 2 thinking,” Psych, vol. 5, no. 4, pp. 1057–1076, 2023.
[69] Y. Qin, X. Li, H. Zou, Y. Liu, S. Xia, Z. Huang, Y. Ye, W. Yuan, H. Liu, Y. Li, et al., “O1 replication journey: A strategic progress report – part 1,” arXiv preprint arXiv:2410.18982, 2024.
[70] L. Kocsis and C. Szepesvári, “Bandit based Monte-Carlo planning,” in Machine Learning: ECML 2006 (J. Fürnkranz, T. Scheffer, and M. Spiliopoulou, eds.), (Berlin, Heidelberg), pp. 282–293, Springer Berlin Heidelberg, 2006.
[71] X. Wang, J. Wei, D. Schuurmans, Q. V. Le, E. H. Chi, S. Narang, A. Chowdhery, and D. Zhou, “Self-consistency improves chain of thought reasoning in language models,” in The Eleventh International Conference on
Learning Representations, 2023.

Technical Appendix of TAPO

The supplementary material provides in-depth insights into our TAPO method, covering additional algorithm details (§A), experimental details (§B), and case study (§C). The appendix is organized as follows:

Contents
1 Introduction
2 Thought-Augmented Policy Optimization
  2.1 Group Relative Policy Optimization (GRPO)
  2.2 Extending GRPO with High-Level External Guidance
  2.3 Thought Library and Augmented Reasoning
3 Experimental Setup
4 Results and Discussion
  4.1 Main Results
  4.2 Training Dynamics
  4.3 Ablation Study and Discussion
  4.4 Case Study
5 Related Work
6 Conclusion
A More Details about TAPO
  A.1 Monte Carlo Tree Search (MCTS)
  A.2 Thought Library Construction
  A.3 Adaptive Retrieval and Instantiation of Thought Patterns
B Experimental Details
C Case Study
D Future Directions

A More Details about TAPO

In this section, we provide a comprehensive elaboration of the TAPO algorithm’s technical details. We further describe the specific implementation of the Monte Carlo Tree Search algorithm, the construction process of our thought library, and the adaptive retrieval and instantiation mechanisms for thought patterns during reasoning.

A.1 Monte Carlo Tree Search (MCTS)

As a heuristic search algorithm, MCTS has demonstrated remarkable success in complex reasoning and decision-making environments [63–65].
The algorithm conceptualizes search spaces as tree structures and has achieved significant breakthroughs across various domains, most notably in game-playing AI such as AlphaGo and AlphaZero [66]. As described in Section
2.3 in the main text, we employ MCTS to generate solution trees based on a small set of 500 seed samples. To implement MCTS effectively, we first define a predefined action set. Understanding human complex reasoning is crucial for modeling cognitive processes [67]. Existing studies distinguish between two cognitive systems: System 1 and System 2 [20, 68]. While “System 1” represents fast, intuitive, yet error-prone thinking, “System 2” involves slow, deliberative thinking with superior performance. With the emergence of advanced models like OpenAI’s o1, developing efficient “System 2” approaches to emulate human cognitive processes has gained significant research attention [69, 2]. Inspired by this and following previous work [23, 25], we introduce five human-like reasoning actions to bridge the gap between model reasoning and human cognition:

• Divide and Conquer (DC, a_1): Approaching complex problems by breaking them into manageable sub-problems for easier resolution.
• Self-Reflection (SR, a_2): Assessing and refining prior solutions during the reasoning process to ensure correctness.
• System Analysis (SA, a_3): Analyzing the overall structure of the problem and identifying the constraints and conditions before addressing it, thereby clarifying task requirements effectively.
• One-Step Thought (OST, a_4): Aiming to address a single aspect of the problem through a focused and concise reasoning step.
• Chain-of-Thought (CoT, a_5): Adopting a sequential reasoning process that builds a series of connected logical steps.

Based on the above predefined action set A = {a_1, ..., a_A} and model π, for each question s_i ∈ S, MCTS builds a search tree T_i where: the root node represents question s_i, each edge denotes an action a ∈ A, and each child node n contains partial solutions generated by π under the corresponding action. A path from the root to a leaf node n_{j,d} forms a solution trajectory t_j = (s_i, a_{j,1}, n_{j,1}, ..., a_{j,d}, n_{j,d}).
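The tree structure just described can be sketched minimally in Python. This is an illustrative data layout only, not the authors’ implementation; the class and field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# The five human-like reasoning actions a1..a5 defined above.
ACTIONS = ["DC", "SR", "SA", "OST", "CoT"]

@dataclass
class Node:
    action: Optional[str]            # edge label from the parent (None for the root)
    text: str                        # partial solution generated under this action
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)
    q: float = 0.0                   # reward value Q(s)
    visits: int = 0                  # visit count N(s)

def trajectory(leaf: "Node") -> List[str]:
    """Recover the action sequence (a_{j,1}, ..., a_{j,d}) from root to leaf."""
    seq = []
    node = leaf
    while node.parent is not None:
        seq.append(node.action)
        node = node.parent
    return list(reversed(seq))

root = Node(None, "question s_i")
n1 = Node("SA", "analyze the constraints ...", parent=root)
root.children.append(n1)
n2 = Node("OST", "one focused reasoning step ...", parent=n1)
n1.children.append(n2)
```

Here `trajectory(n2)` recovers `["SA", "OST"]`, i.e. the abstract action sequence that a path from the root to a leaf encodes.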
Each intermediate node n_{j,l} is generated based on the cumulative context of its parent nodes and the current action, i.e., n_{j,l} = π([s_i, a_{j,1}, n_{j,1}, ..., a_{j,l}]). Specifically, the MCTS algorithm involves an iterative search process with four key steps: selection, expansion, simulation, and backpropagation.

(1) Selection. This operation identifies optimal nodes for expansion. Starting from the root node, a child node is chosen at each tree level until reaching a leaf node, defined as achieving maximum tree depth or arriving at an answer node. To balance exploration and exploitation, we employ the well-known Upper Confidence Bounds applied to Trees (UCT) [70] for node selection:

UCT(s) = Q(s) + w · sqrt( ln N(p) / N(s) )   (8)

where Q(s) is the reward value for node s, N(s) is the visit count, p is the parent node, and w is the exploration weight. The node with the highest UCT value is selected for subsequent phases, balancing exploration and exploitation.

(2) Expansion. The selected node s is expanded by sampling n actions from π and generating the corresponding reasoning outcomes. These n child nodes are then added to the tree.

(3) Simulation. Starting from the selected node, we iteratively sample and expand nodes until reaching a terminal state (maximum depth or final answer node). To enhance efficiency, we implement an early termination strategy based on self-consistency [71]. This strategy exploits the observation that repeatedly sampled actions at the same state likely indicate successful task completion. If the model’s consistency score exceeds a threshold c, i.e., SC(s) > c, the simulation terminates early.

[Figure 8: An illustration of the four phases in an iteration of MCTS for complex reasoning tasks: (a) Selection, (b) Expansion, (c) Simulation, (d) Backpropagation.]

(4) Backpropagation. Upon simulation completion, node information is updated along the simulation path s_0, ..., s_d.
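The UCT rule of Eq. (8) reduces to a few lines of code. The sketch below is an illustration under assumed names (`uct`, `select_child`); treating unvisited children as infinitely attractive is a common convention, not something stated in the paper:

```python
import math

def uct(q: float, n_parent: int, n_node: int, w: float = 1.0) -> float:
    """UCT(s) = Q(s) + w * sqrt(ln N(p) / N(s))  (Eq. 8)."""
    if n_node == 0:
        return float("inf")  # unvisited children are always tried first
    return q + w * math.sqrt(math.log(n_parent) / n_node)

def select_child(children):
    """Pick the index of the child with the highest UCT value at one level.
    Each child is a (Q, visit-count) pair; the parent's count is their sum."""
    n_parent = sum(n for _, n in children) or 1
    return max(range(len(children)),
               key=lambda i: uct(children[i][0], n_parent, children[i][1]))
```

For example, with children `[(0.9, 8), (0.5, 1)]` the rarely visited second child wins despite its lower Q, which is exactly the exploration behavior the `ln N(p) / N(s)` term is meant to produce.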
Visit counts are incremented (N(s) ← N(s) + 1), and the node value Q(s) is propagated backward to its parent node p using the following equation:

Q(p) ← (1 − α)Q(p) + αQ(s)   (9)

where α is a discount factor for future rewards. For terminal nodes, following prior work [25], we adopt the likelihood (confidence) of self-consistency majority voting as the reward value, enabling supervision-free generalization. Through the above four-step process, we obtain a diverse set of solution traces T = {t_1, t_2, ..., t_t} for question s_i ∈ S. The MCTS algorithm assigns a final reward R(t_j | s_i) to each trace t_j ∈ T. Figure 8 illustrates the four phases in an iteration, expanding the tree and then updating reward values.

A.2 Thought Library Construction

For each question s_i ∈ S, we obtain its solution tree through MCTS, which provides multiple concrete solution paths for s_i. As described in Section 2.3 in the main text, we then need to identify the best reasoning path for s_i and abstract it into generalizable thought patterns. To identify the optimal trajectory for each question s_i, we employ a balanced scoring metric proposed in HiAR-ICL [23]:

Score(s_i, t_j) = b · R(t_j | s_i) − (1 − b) · C(t_j),   (10)

where C(t_j) represents trajectory complexity (action count), and b (set to 0.95) balances solution quality against complexity. This scoring function selects trajectories that maximize accuracy while maintaining procedural conciseness among multiple potential solutions. For each question s_i ∈ S, we select the optimal solution trace t_{i,best} that maximizes this score. Since each node in t_{i,best} corresponds to an instantiated action a_{i,l} ∈ A, we extract the general action sequence as a high-level thought pattern T_j = (a_1, ..., a_d). For instance, a successful solution might follow the pattern a_3 → a_4 → a_1 (System Analysis → One-Step Thought → Divide and Conquer). To organize these extracted patterns effectively, we introduce Problem Condition Complexity (PCC) [27, 28] as a categorization metric. PCC quantifies the number of known prior conditions in a question s_i and can be calculated by the model π. Similar problems tend to share similar PCC values, making this metric effective for pattern aggregation. Through this process, each question s_i ∈ S is associated with its optimal thought pattern, with some questions naturally sharing identical patterns (e.g., a_3 → a_4 → a_1). Our final thought library L = {T̂_1, ..., T̂_s} consists of entries where each thought template T̂_j contains both a high-level thought pattern and the average PCC of questions sharing this pattern: T̂_j = (PCC_{T_j}, T_j). These templates represent generalized problem-solving strategies and serve as external guidance for similar problems encountered during GRPO training.

A.3 Adaptive Retrieval and Instantiation of Thought Patterns

When encountering a new problem during GRPO training, we employ an adaptive retrieval mechanism to identify and apply the most relevant reasoning strategies from our thought library. This approach is grounded in meta-reasoning principles [29, 30], which emphasize the importance of selecting appropriate problem-solving strategies based on problem characteristics.

Adaptive Retrieval. For each new incoming question q_t encountered during GRPO sampling, we first compute its PCC metric. This complexity measure serves as a fingerprint that characterizes the question’s structure and difficulty. We then compare this value against the PCC values of all templates in our thought library L by calculating the absolute distance for each template T̂_j ∈ L:

d_j = |PCC_{q_t} − PCC_{T_j}|   (11)

This distance metric quantifies how similar the current question’s complexity is to that of the problems from which each thought pattern was derived.
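The backpropagation update of Eq. (9) and the trace-selection metric of Eq. (10) can be sketched as follows. This is a minimal numeric illustration under assumed function names, not the authors’ code, and the sample rewards and action counts are made up:

```python
def backprop_update(q_parent: float, q_child: float, alpha: float) -> float:
    """Q(p) <- (1 - alpha) * Q(p) + alpha * Q(s)  (Eq. 9)."""
    return (1 - alpha) * q_parent + alpha * q_child

def trace_score(reward: float, num_actions: int, b: float = 0.95) -> float:
    """Score(s_i, t_j) = b * R(t_j | s_i) - (1 - b) * C(t_j)  (Eq. 10)."""
    return b * reward - (1 - b) * num_actions

# Three candidate traces for one question, as (reward R, action count C).
traces = [(1.0, 5), (1.0, 3), (0.6, 2)]
best = max(traces, key=lambda rc: trace_score(*rc))
```

With b = 0.95 the three scores are 0.70, 0.80, and 0.47, so among the two fully correct traces the shorter one is kept, while the short but low-reward trace loses: exactly the accuracy-vs-conciseness trade-off the metric encodes.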
Pattern Selection and Application. After computing these distances, we rank the templates and select the k most similar ones {T̂_{i_1}, ..., T̂_{i_k}} that minimize this distance measure. These selected templates contain high-level thought patterns that have proven effective for problems with similar complexity profiles. The retrieved thought patterns, which are sequences of abstract reasoning actions (a_1, ..., a_d), guide the step-by-step reasoning process for question q_t. During GRPO sampling, these patterns serve as external guidance that effectively balances between exploiting known successful strategies and allowing for model-internal exploration. This adaptive retrieval mechanism ensures that the model leverages appropriate reasoning strategies based on problem characteristics, rather than attempting to apply a one-size-fits-all approach. By dynamically matching problems with relevant thought patterns, our framework enables more targeted and effective sampling across diverse problem types.

B Experimental Details

In addition to the implementation details presented in the main text, we provide supplementary experimental details here. During training, we generate with rollout parameters of temperature = 0.8 and top-p = 0.95, and a maximum generation length of 1500 tokens. The reward function is a binary accuracy metric verified by Math-Verify, defined as r(o) = 1{o contains the correct final answer}. Moreover, we employ cosine learning rate decay with warm-up. The maximum learning rate is set at 3 × 10^−6, and the warm-up ratio is set at 0.1. We use the same system prompt for all experiments,
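The PCC-based ranking of Eq. (11) followed by top-k selection amounts to a sort by absolute distance. The sketch below uses a toy library whose PCC values and patterns are invented for illustration; the function name and data layout are assumptions, not the paper’s API:

```python
from typing import List, Tuple

Pattern = List[str]                # e.g. ["SA", "OST", "DC"]
Template = Tuple[float, Pattern]   # (average PCC, high-level thought pattern)

def retrieve_patterns(pcc_q: float, library: List[Template], k: int) -> List[Pattern]:
    """Rank templates by d_j = |PCC_q - PCC_Tj| (Eq. 11) and keep the k closest."""
    ranked = sorted(library, key=lambda t: abs(pcc_q - t[0]))
    return [pattern for _, pattern in ranked[:k]]

library = [(2.0, ["SA", "OST"]),
           (5.0, ["DC", "CoT"]),
           (3.0, ["SA", "DC", "CoT"])]
guides = retrieve_patterns(2.5, library, k=2)
```

For a question with PCC 2.5, the distances to the three templates are 0.5, 2.5, and 0.5, so the two templates derived from similar-complexity problems are returned as guidance while the distant one is ignored.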