text
string
source
string
large-scale moe models. Qwen Blog . Accessed on 2025-04-05. Mikayel Samvelyan, Sharath Chandra Raparthy, An- drei Lupu, Eric Hambro, Aram H. Markosyan, Manish Bhatt, Yuning Mao, Minqi Jiang, Jack Parker-Holder, Jakob Foerster, Tim Rocktäschel, and Roberta Raileanu. 2024. Rainbow teaming: Open- ended generation of diverse adversarial prompts. In Advances in Neural Information Processing Systems , volume 37, pages 69747–69786. Curran Associates, Inc. C. E. Shannon. 1948. A mathematical theory of com- munication. The Bell System Technical Journal , 27(3):379–423. Melissa K Stern and James H Johnson. 2010. Just no- ticeable difference. The corsini encyclopedia of psy- chology , pages 1–2. Robert Tarjan. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing , 1(2):146– 160. Gemini Team, Rohan Anil, Sebastian Borgeaud, Jean- Baptiste Alayrac, Jiahui Yu, Radu Soricut, Johan Schalkwyk, Andrew M. Dai, Anja Hauth, Katie Millican, David Silver, Melvin Johnson, Ioannis Antonoglou, and othres. 2024. Gemini: A fam- ily of highly capable multimodal models. Preprint , arXiv:2312.11805. Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti 10 Bhosale, Dan Bikel, Lukas Blecher, and othres. 2023. Llama 2: Open foundation and fine-tuned chat mod- els.Preprint , arXiv:2307.09288. Peiyi Wang, Lei Li, Liang Chen, Zefan Cai, Dawei Zhu, Binghuai Lin, Yunbo Cao, Lingpeng Kong, Qi Liu, Tianyu Liu, and Zhifang Sui. 2024a. Large lan- guage models are not fair evaluators. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 9440–9450, Bangkok, Thailand. Association for Computational Linguistics. Yidong Wang, Zhuohao Yu, Zhengran Zeng, Linyi Yang, Cunxiang Wang, Hao Chen, Chaoya Jiang, Rui Xie, Jindong Wang, Xing Xie, Wei Ye, Shikun Zhang, and Yue Zhang. 2024b. Pandalm: An automatic evaluation benchmark for llm instruction tuning opti- mization. Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, Qingwei Lin, and Daxin Jiang. 2024a. WizardLM: Empow- ering large pre-trained language models to follow complex instructions. In The Twelfth International Conference on Learning Representations . Wenda Xu, Guanglei Zhu, Xuandong Zhao, Liangming Pan, Lei Li, and William Wang. 2024b. Pride and prejudice: LLM amplifies self-bias in self-refinement. InProceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 15474–15492, Bangkok, Thai- land. Association for Computational Linguistics. Yi Xu, Laura Ruis, Tim Rocktäschel, and Robert Kirk. 2025. Investigating Non-Transitivity in LLM-as-a- Judge. arXiv e-prints , arXiv:2502.14074. Jiayi Ye, Yanbo Wang, Yue Huang, Dongping Chen, Qihui Zhang, Nuno Moniz, Tian Gao, Werner Geyer, Chao Huang, Pin-Yu Chen, Nitesh V Chawla, and Xiangliang Zhang. 2025. Justice or prejudice? quan- tifying biases in LLM-as-a-judge. In The Thirteenth International Conference on Learning Representa- tions . Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, and 1 others. 2024. Judging llm-as-a-judge with mt-bench and chatbot arena. Advances in Neural Information Pro- cessing Systems , 36. Lianghui Zhu, Xinggang Wang, and Xinlong Wang. 2025. JudgeLM: Fine-tuned large language models are scalable judges. In The Thirteenth International Conference on Learning Representations
https://arxiv.org/abs/2505.17691v1
. Dongcheng Zou, Senzhang Wang, Xuefeng Li, Hao Peng, Yuandong Wang, Chunyang Liu, Kehua Sheng, and Bo Zhang. 2024. Multispans: A multi-range spatial-temporal transformer network for traffic fore- cast via structural entropy optimization. In Proceed- ings of the 17th ACM International Conference on Web Search and Data Mining , WSDM ’24, page1032–1041, New York, NY , USA. Association for Computing Machinery. A LLM Details This section provides a detailed overview of the models used to construct the training and testing sets in the experiment. We selected a total of 21 LLMs, covering advanced closed-source models, large-scale and small-scale open-source models, as well as dense and Mixture-of-Experts (MoE) architectures. Specifically, the selection includes the following: •OpenAI : OpenAI model series (OpenAI et al., 2024). •Anthropic : Claude model series(Anthropic, 2023, 2024). •Mistral : Mistral model series(Jiang et al., 2024, 2023). •Alibaba : Qwen model series(Qwen Team, 2024a,b). •Meta : LLaMA model series (Meta AI, 2024a,b; Touvron et al., 2023). •Yi: Yi model series(AI et al., 2025). •Google : Gemini model series(Team et al., 2024). •Other Notable Models : Vicuna 33B(Chiang et al., 2023), Tulu 2+DPO 70B(Ivison et al., 2023), and WizardLM 13B(Xu et al., 2024a). The pre-generated outputs for each model on the AlpacaEval dataset were sourced from the Al- pacaEval leaderboard. B Additional Experimental Results B.1 Experimental Results of Different Prompt Forms. We repeated the experiment using the CoT Com- parison (Tie Allowed) prompt template. Figure 4 illustrates the distribution of dataset sizes before and after filtering across different training sets. The test results are presented in Table 5. Across five testing sets, models fine-tuned on the "Cleaned" training set exhibit the highest prefer- ence clarity. In four of these testing sets (exclud- ing Vicuna), these models also demonstrate the lowest preference non-transitivity, outperforming 11 Helpful_Base Vicuna Oasst Koala Self-instruct Dataset010000200003000040000Data Volumes23478 1456034216 2839245864 17943 1088826490 2140535041Raw CleanedFigure 4: Comparison of Data V olumes Between "Raw" and "Cleaned" Training Sets Across Different Datasets (Using the CoT Comparison (Tie Allowed) Prompt Tem- plate) models fine-tuned on the "Raw" training set and the teacher model Qwen2.5-Max. We observe that models fine-tuned on the "Raw" training set, as well as the teacher model Qwen2.5-Max, exhibit a higher rate of ties in preferences on the Vicuna testing set. The prevalence of such ties—often aris- ing from SCCs formed by cases like (A=B), (B=C), and (C=A)—significantly reduces the degree of preference non-transitivity when these SCCs are removed. As shown in Table 4, we com- pare the probability of ties among three models on the Vicuna testing set. Furthermore, the τ↓ avgvalues of models fine-tuned on the "Cleaned" training set indicate a higher overall clarity in preferences. Model tie probability Qwen2.5-Max 7.59% Qwen-Raw 10.06% Qwen-Cleaned 6.13% Table 4: Comparison of the probability of ties for mod- els on the Vicuna testing set. B.2 Experimental Results of Different Base Model. We repeated the experiment using the LLaMA3.1- 8B-Instruct model as the base model. The test results are presented in Table 6. We also compared our results with the advanced open-source Evalua- tor LLM Auto-J 13B, which is fine-tuned based
https://arxiv.org/abs/2505.17691v1
on the LLaMA-2-13B-Chat model. Due to the limited context length of 2048 tokens supported by the advanced open-source evaluators PandaLM and JudgeLM, which is insufficient to cover the evaluation tasks in our testing set, we did not include them in our experiments.B.3 "Unseen" question validation We perform a detailed comparison of the perfor- mance of models fine-tuned on different training sets and the advanced Evaluator LLM on various testing sets: •Results for models based on Qwen2.5-7B- Instruct are presented in Table 7. •Results for models based on LLaMA3.1-8B- Instruct are presented in Table 8. C Evaluate Prompts C.1 CoT Comparison The CoT Comparison prompt template, identical to AlpacaEval 2.0 (Li et al., 2023), is detailed in Table 9. C.2 CoT Comparison (Tie Allowed) The CoT Comparison (Tie Allowed) prompt tem- plate, as described in (Xu et al., 2025), is detailed in Table 10. D Under What Scenarios Are Evaluator LLMs Prone to Non-Transitive Preferences? Evaluator LLMs, much like human judges, tend to exhibit non-transitive preferences under certain conditions. First, when the quality differences between re- sponses fall below the Evaluator LLM’s just notice- able difference (JND) threshold (a concept widely used in psychophysics to describe the minimum difference perceptible to an observer (Stern and Johnson, 2010)) the model may "lose evaluative focus." Analogous to how humans struggle to dif- ferentiate between highly similar colors on a gra- dient from yellow to orange, the Evaluator LLM may perceive adjacent responses as "almost equally good." While individual comparisons appear con- sistent, cumulative evaluations can yield preference cycles (e.g., A≻B,B≻C,C≻A), reflecting underlying non-transitivity. Second, when responses differ significantly in stylistic dimensions, the Evaluator LLM may ex- hibit indecision due to the inherent conflict in multi- dimensional evaluation. Consider an open-ended question evaluated along three criteria: rigor, vivid- ness, and richness: •Response A is the most rigorous, moderately vivid, and least rich. 12 •Response B is the richest, moderately rigor- ous, and least vivid. •Response C is the most vivid, moderately rich, and least rigorous. If overall preference is defined as outperforming another response in at least two out of three di- mensions, instances such as A≻B,B≻C, andC≻Amay arise—resembling the classic voting paradox, also known as the Condorcet para- dox (Fishburn, 1974). This reflects a fundamental limitation in aggregating preferences across con- flicting criteria: when each response possesses a distinct advantage, constructing a consistent global ranking becomes infeasible. The problem is es- pecially pronounced in open-ended evaluations, where stylistic variance (e.g., concise and rigor- ous analysis vs. vivid and elaborate narratives) can lead to non-transitive outcomes in Evaluator LLM preferences. E Algorithm DetailAlgorithm 1 Filtering Strategy for Preference Data That Induces Non-Transitivity Require: Cyclic directed graph G= (V, E) Ensure: Cleaned dataset with transitive prefer- ence relations 1:Decompose Ginto strongly connected compo- nents SCCs :{SCC 1, SCC 2, . . . , SCC n} 2:foreachSCC i∈ {SCC 1, . . . , SCC n}do 3: foreach vertex vk∈SCC ido 4: Compute in-degree score ein k 5: end for 6: Remove edges within SCC i 7: foreach pair of vertices (vi, vj)∈SCC i do 8: ifein i>
https://arxiv.org/abs/2505.17691v1
ein jthen 9: Add edge (vj→vi) 10: else if ein i=ein jthen 11: Add bidirectional edge (vi↔vj) 12: end if 13: end for 14:end for 15:Combine all modified SCC icomponents to form a global DAG G′ 16:foreach edge in G′do 17: ifedge is (vi→vj)then 18: SetJ(ai, aj) = ‘lose’ ,J(aj, ai) = ‘win’ 19: else if edge is (vi↔vj)then 20: SetJ(ai, aj) =J(aj, ai) =‘tie’ 21: end if 22:end for 23:Initialize empty dataset Cleaned 24:foreach data point in original dataset do 25: ifpreference relation matches Jthen 26: Add data point to Cleaned 27: end if 28:end for 29:return Cleaned 13 ModelHelpful_Base Vicuna Oasst Koala Self-instruct ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg Qwen2.5-Max 76.97% 0.8654 78.04% 0.9231 79.71% 0.8659 76.31% 0.8923 71.77% 0.8595 Qwen-Base 85.94% 0.9371 90.71% 0.9602 84.88% 0.9459 81.96% 0.9626 76.64% 0.8853 Qwen-Raw 79.07% 0.9244 83.21% 0.9396 77.58% 0.9066 78.66% 0.9078 73.87% 0.8853 Qwen-Cleaned (ours) 74.42% 0.8596 85.89% 0.8922 73.94% 0.8564 72.34% 0.8403 69.05% 0.8288 Table 5: Comparison of Preference Non-Transitivity andOverall Clarity for Evaluator LLMs( using the CoT Comparison (Tie Allowed) prompt template). Qwen-Base denotes to the original Qwen2.5-7B-Instruct model, while Qwen-Raw, Qwen-Random, and Qwen-Cleaned denote models fine-tuned on the "Raw" and"Cleaned" training sets, respectively. For example, Qwen-Cleaned in the Helpful_Base column reflects the performance of the model fine-tuned on the "Cleaned" training set derived from filtered Helpful_Base data. ModelHelpful_Base Vicuna Oasst Koala Self-instruct ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg Auto-J-13B 51.50% 0.7049 63.57% 0.7765 54.86% 0.7101 53.94% 0.7213 58.90% 0.7371 LLaMA-Base 76.41% 0.8518 66.96% 0.8461 67.40% 0.7933 69.87% 0.8180 71.03% 0.8091 LLaMA-Raw 59.03% 0.7654 60.18% 0.7718 58.21% 0.7595 57.22% 0.7457 59.95% 0.7830 LLaMA-Cleaned (ours) 40.20% 0.6523 45.36% 0.6910 43.01% 0.6288 44.38% 0.6591 41.95% 0.6423 Table 6: Comparison of Preference Non-Transitivity andOverall Clarity for Evaluator LLMs. LLaMA-Base refers to the original LLaMA3.1-8B-Instruct model, while LLaMA-Raw, LLaMA-Random, and LLaMA-Cleaned denote models fine-tuned on the "Raw" and"Cleaned" training sets, respectively. For example, LLaMA-Cleaned in the Helpful_Base column reflects the performance of the model fine-tuned on the "Cleaned" training set derived from filtered Helpful_Base data. Auto-J 13B is fine-tuned based on the LLaMA-2-13B-Chat model. ModelHelpful_Base Vicuna Oasst Koala Self-instruct Average ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg Stronger LLMs Qwen2.5-Max 63.68% 0.8047 75.36% 0.8448 64.29% 0.7883 71.52% 0.8296 64.97% 0.7803 67.96% 0.8095 Base Model Qwen-Base 82.83% 0.9221 78.93% 0.8912 83.36% 0.9138 80.96% 0.9097 81.46% 0.9123 81.51% 0.9098 Cleaned group Qwen-Helpful_Base-Cleaned 44.85% 0.6997 48.93% 0.7371 52.13% 0.7137 53.75% 0.7223 51.98% 0.7138 50.33% 0.7173 Qwen-Vicuna-Cleaned 47.18% 0.7395 43.93% 0.7262 52.43% 0.7276 53.48% 0.7418 57.77% 0.7483 50.96% 0.7367 Qwen-Oasst-Cleaned 45.74% 0.7040 45.00% 0.7109 46.96% 0.6942 54.85% 0.7186 51.08% 0.7227 48.73% 0.7101 Qwen-Koala-Cleaned 50.28% 0.7357 46.79% 0.7045 48.94% 0.7183 48.53% 0.7149 48.70% 0.7150 48.65% 0.7177 Qwen-Self-instruct-Cleaned 56.26% 0.7504 51.79% 0.7489 54.94% 0.7370 57.23% 0.7590 48.98% 0.6802 53.84% 0.7351 Raw group Qwen-Helpful_Base-Raw 62.02% 0.7963 74.29% 0.8626 63.07% 0.8109 69.87% 0.8385 60.49% 0.7936 65.95%
https://arxiv.org/abs/2505.17691v1
0.8204 Qwen-Vicuna-Raw 61.13% 0.8147 57.50% 0.8034 64.59% 0.8019 60.35% 0.8150 64.23% 0.7966 61.56% 0.8063 Qwen-Oasst-Raw 60.58% 0.8099 70.89% 0.8101 55.78% 0.7734 63.83% 0.8150 61.96% 0.7746 62.61% 0.7966 Qwen-Koala-Raw 64.23% 0.8228 67.86% 0.8090 64.59% 0.7966 64.29% 0.8159 59.30% 0.7779 64.05% 0.8044 Qwen-Self-instruct-Raw 68.99% 0.8569 71.79% 0.8595 68.84% 0.8167 66.85% 0.8433 59.69% 0.7665 67.23% 0.8286 Table 7: Comparison of Preference Non-Transitivity andOverall Clarity for Evaluator LLMs. Qwen-Base denotes to the original Qwen2.5-7B-Instruct model. Raw group andCleaned group refer to models fine-tuned on the original and filtered training sets, respectively. Specifically, Qwen-Helpful_Base-Raw is fine-tuned on the "Raw" training data generated from the Helpful_Base dataset, while Qwen-Helpful_Base-Cleaned is fine-tuned on the corresponding "Cleaned" dataset after removing non-transitive preference data. 14 ModelHelpful_Base Vicuna Oasst Koala Self-instruct Average ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg ρ↓ non-trans τ↓ avg Stronger LLMs Qwen2.5-Max 63.68% 0.8047 75.36% 0.8448 64.29% 0.7883 71.52% 0.8296 64.97% 0.7803 67.96% 0.8095 Base Model Auto-J-13B 51.50% 0.7049 63.57% 0.7765 54.86% 0.7101 53.94% 0.7213 58.90% 0.7371 56.55% 0.7300 LLaMA-Base 76.41% 0.8518 66.96% 0.8461 67.40% 0.7933 69.87% 0.8180 71.03% 0.8091 70.33% 0.8237 Cleaned group LLaMA-Helpful_Base-Cleaned 40.20% 0.6523 47.86% 0.6810 49.92% 0.6827 50.92% 0.7003 55.33% 0.7300 48.85% 0.6893 LLaMA-Vicuna-Cleaned 49.83% 0.7139 45.36% 0.6910 48.02% 0.6863 53.02% 0.7295 57.14% 0.7408 50.67% 0.7123 LLaMA-Oasst-Cleaned 50.28% 0.6655 47.68% 0.6812 43.01% 0.6288 50.73% 0.7034 54.25% 0.7056 49.19% 0.6769 LLaMA-Koala-Cleaned 44.85% 0.6588 46.61% 0.6826 44.38% 0.6591 45.51% 0.6827 50.11% 0.7031 46.29% 0.6773 LLaMA-Self-instruct-Cleaned 40.42% 0.6373 49.29% 0.6951 41.95% 0.6423 48.99% 0.6959 46.15% 0.6652 45.36% 0.6672 Raw group LLaMA-Helpful_Base-Raw 59.03% 0.7654 60.71% 0.7888 59.42% 0.7682 61.36% 0.7965 63.78% 0.7886 60.86% 0.7815 LLaMA-Vicuna-Raw 56.92% 0.8028 60.18% 0.7718 58.81% 0.7646 60.35% 0.7943 59.69% 0.7644 59.19% 0.7797 LLaMA-Oasst-Raw 59.80% 0.7821 64.64% 0.7878 58.21% 0.7596 67.49% 0.8062 63.89% 0.7745 62.81 0.7820 LLaMA-Koala-Raw 58.03% 0.7532 53.75% 0.7601 57.22% 0.7457 57.23% 0.7597 65.59% 0.7887 58.36% 0.7615 LLaMA-Self-instruct-Raw 62.79% 0.7860 64.11% 0.7885 59.95% 0.7830 65.02% 0.8054 56.52% 0.7559 61.68% 0.7838 Table 8: Comparison of Preference Non-Transitivity andOverall Clarity for Evaluator LLMs. LLaMA-Base refers to the original LLaMA3.1-8B-Instruct model. Raw group andCleaned group refer to models fine-tuned on the original and filtered training sets, respectively. Specifically, LLaMA-Helpful_Base-Raw is fine-tuned on the "Raw" training data generated from the Helpful_Base dataset, while LLaMA-Helpful_Base-Cleaned is fine-tuned on the corresponding "Cleaned" dataset after removing non-transitive preference data. 15 System You are a highly efficient assistant, who evaluates and selects the best large language model (LLMs) based on the quality of their responses to a given instruction. This process will be used to create a leaderboard reflecting the most accurate and human- preferred answers. User I require a leaderboard for various large language models. I’ll provide you with prompts given to these models and their corresponding outputs. Your task is to assess these responses and select the model that produces the best output from a human perspective. ## Instruction { "instruction": """{instruction}""", } ## Model Outputs Here are the unordered outputs from the models. Each output is associated with a specific model, identified by a unique model identifier. { { "model_identifier": "m", "output": """{output_1}""" },
https://arxiv.org/abs/2505.17691v1
{ "model_identifier": "M", "output": """{output_2}""" } } ## Task Evaluate the models based on the quality and relevance of their outputs, and select the model that generated the best output. Answer by first providing a concise explanation and then end your answer by providing the model identifier of the best output. We will use the last character of your output ‘output[-1]‘ as the name of the best model, so make sure you finish with the token of the model identifiers and nothing else: ‘m‘ or ‘M‘ (no quotes, no dots, no backticks, no new lines, ...). For example: ### Concise explanation ...some text... ### Which is best, m or M? M Now is your turn. ## Your answer: "Concise explanation" followed by "Which is best, m or M?" Table 9: The Chain-of-Thought Comparison prompt for pairwise comparison. 16 System You are a highly efficient assistant, who evaluates and selects the best large language model (LLMs) based on the quality of their responses to a given instruction. This process will be used to create a leaderboard reflecting the most accurate and human- preferred answers. User I require a leaderboard for various large language models. I’ll provide you with prompts given to these models and their corresponding outputs. Your task is to assess these responses, and select the model that produces the best output from a human perspective. If you determine that both outputs are of equal quality or are unable to decide which one is better, you should indicate a tie by providing the identifier ‘D‘. ## Instruction { "instruction": """{instruction}""", } ## Model Outputs Here are the unordered outputs from the models. Each output is associated with a specific model, identified by a unique model identifier. { { "model_identifier": "m", "output": """{output_1}""" }, { "model_identifier": "M", "output": """{output_2}""" } } ## Task Evaluate the models based on the quality and relevance of their outputs, and select the model that generated the best output. Answer by first providing a concise explanation and then end your answer by providing the model identifier of the best output. If you determine that both outputs are of equal quality or cannot decide which one is better, indicate a tie by using the identifier ‘D‘. We will use the last character of your output ‘output[-1]‘ as the name of the best model, so make sure you finish with the token of the model identifiers and nothing else: ‘m‘, ‘M‘ or ‘D‘ (no quotes, no dots, no backticks, no new lines, ...). For example: ### Concise explanation ...some text... ### Which is best, m, M or D? M Now is your turn. ## Your answer: "Concise explanation" followed by "Which is best, m, M or D?" Table 10: The Chain-of-Thought Comparison prompt (Tie Allowed) for pairwise comparison. 17
https://arxiv.org/abs/2505.17691v1
arXiv:2505.17697v1 [cs.CL] 23 May 2025Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models Zekai Zhao∗, Qi Liu∗, Kun Zhou†, Zihan Liu, Yifei Shao, Zhiting Hu, Biwei Huang University of California, San Diego. kuzhou@ucsd.edu Abstract Despite the remarkable reasoning performance, eliciting the long chain-of- thought (CoT) ability in large language models (LLMs) typically requires costly reinforcement learning or supervised fine-tuning on high-quality distilled data. We investigate the internal mechanisms behind this capability and show that a small set of high-impact activations in the last few layers, greatly govern the long-form rea- soning attributes, e.g.,output length and self-reflection. Through simply amplifying these activations and adding “wait” tokens, the long CoT ability can be invoked without training, leading to significantly increased self-reflection rate and accuracy. In addition, we also find that the activation changes follow predictable trajectories, i.e.,a sharp rise after special tokens and a subsequent exponential decay. Based on these insights, we introduce a general training-free activation control technique. It utilizes a few contrastive examples to identify the relevant activations, and then incorporates simple analytic functions to adjust their values at inference time to elicit long CoTs. Extensive experiments have verified the effectiveness of our methods in efficiently eliciting the long CoT ability of LLMs and improving the performance. Besides, we further propose a parameter-efficient fine-tuning method that trains only the last-layer activation amplification module and a few LoRA lay- ers, outperforming LoRA on reasoning benchmarks with much fewer parameters. Our code and data are fully public released https://github.com/ZekaiZ123/EELo- CoT/tree/main. 1 Introduction On the path to artificial general intelligence (AGI), enhancing the reasoning ability of large language models (LLMs) [ 1,2,3,4] remains to be one of the most important challenges. Techniques like chain-of-thought (CoT) prompting [ 5] can elicit good performance on reasoning tasks by prompting the LLM to generate intermediate reasoning steps. However, in complex reasoning tasks ( e.g., math competition problems), such a way is prone to make mistakes in intermediate steps, finally failing to reach the accurate answer. Recently, long-CoT models and systems such as OpenAI-o1 [ 3] and DeepSeek-R1 [ 1] have exhibited remarkable performance in solving complex tasks. These methods can perform human-like slow-thinking behaviors, with quite a few careful deliberation and self-reflection steps before generating the final answer [1]. To elicit the long CoT ability, existing work relies on either reinforcement learning (RL) on high- quality instances with proper annotations, or supervised fine-tuning (SFT) on distilled data [ 6,7,8]. However, in practice, it is costly to collect enough well-annotated high-quality instances or distilled data [ 7], especially for complex reasoning tasks that humans may also not perform well. Besides, the RL process is also complicated, and requires expert knowledge to control the training stability and ∗Equal Contribution †Corresponding Author Preprint. Under review. effectiveness [ 9,10]. In fact, the key bottleneck comes from the lack of understanding of how the long CoT ability is elicited in LLMs . Since LLMs have undergone a very long pre-training stage and then a relatively short post-training stage using RL or SFT, it is possible that the long-CoT ability already exists
https://arxiv.org/abs/2505.17697v1
in base models [ 1,11]. If so, we can efficiently wake up the long-CoT ability when necessary, and exert influence to achieve more fine-grained controls. In LLM, activation values of MLP layers play a key role in preserving key information or styles during inference [ 12,13]. Inspired by it, we analyze and compare the activation dynamics in state-of-the-art LLMs with and without the long-CoT ability during inference on complex reasoning tasks. By tracing the layer-wise activations, we identify specific positions and patterns that correlate with desirable CoT traits ( e.g., longer and with self-reflection). These activations generally locate in the last few layers (see Fig. 1). By simply amplifying them and adding the “wait” token, the long CoT ability can be invoked without training (see Fig. 3). In addition, we also find that the activation changes in these positions follow predictable patterns, i.e.,a sharp rise after the trigger tokens ( e.g., wait) and a subsequent exponential decay (see Fig. 4). Therefore, it is promising to utilize an analytic function to fit the pattern and mimic the activation to efficiently elicit the long-CoT ability. Based on the above findings, we propose a training-free activation control approach, to elicit the long-CoT ability in the inference process. Concretely, we first collect a few pairs of contrast examples about the long-CoT ability. Then, we use the contrast examples to identify the related activation positions and collect patterns in the base model. Next, we formulate the pattern of each activation value into a unified function with few coefficients to control the change tendency and intervention intensity. Based on the collected patterns, we fit these coefficients, and devise a simple rule that triggers intervention after encountering special tokens during inference. Since we do not need to train the LLM and the activation control method only requires to use base models, our approach is general to any LLMs and any datasets for efficiently elicit the long-CoT ability when necessary. Extensive experiments conducted on complex math and science reasoning benchmarks have demonstrated the effectiveness of our approach in improving the LLM performance (see Table 1) with special long-CoT style. We also reveal that the long-CoT related activation positions become more inactive or even dead in over-posttrained LLMs. Since we can identify the key activations within the LLM, we also propose a parameter-efficient fine-tuning method that focuses on automatically learning the activation pattern from long-CoT related training data. Concretely, we add a rather lightweight learnable amplification module for the identified activations in the last layer, and add LoRA layers with a low rank in the former layers, while fixing the parameters of other parameters. In this way, our approach only requires to train much fewer parameters than existing LoRA method. Experimental results have shown that our approach can achieve better performance than LoRA. It achieves comparable performance of full-parameter fine-tuning method, but only trains 1.51% of total parameters. 2 Empirical Analysis on the Activation Patterns of Long CoT Ability In this section, we empirically analyze the activation patterns underlying the long CoT reasoning ability, since existing work has shown that activation
https://arxiv.org/abs/2505.17697v1
values can preserve key information of styles or concepts [ 12]. We conduct a series of experiments on state-of-the-art Qwen2.5-7B series LLMs, and first study (1) the distribution of long-CoT related activations, then test (2) whether activation intervention can elicit long-CoT ability, finally analyze (3) whether there are predictable dynamic patterns in activation values during inference. 2.1 Long-CoT Related Activations Distribution Study We aim to reveal how the key activation values are distributed within an LLM and the difference between LLMs with and without the long-CoT ability. We first collect a set of contrastive example pairs about the long-CoT characteristics, which are fed to the LLM to collect the long-CoT related activations. Then, we analyze the distribution of the top ones across layers and different LLMs. Analysis Setup. We conduct the experiments on MATH [ 14], a widely used mathematical reasoning dataset. For the contrastive example pairs, we consider the following three key characteristics of long-CoT ability, i.e.,longer, with self-reflection, and more accurate. Thus, we build two datasets that contain positive and negative samples that satisfy and unsatisfy the above features, respectively. 2 0 4 812 16 20 2427 Layer index02004006008001000Activation Value > 4R1-distill-Qwen-7B Qwen2.5-Math-7B-base Qwen2.5-7B-baseFigure 1: (a) Sparse Activations when processing Long CoT 1.0 1.2 1.4 1.6 Amplification factor606570Accuracy (%) Constant Intervene Wait Token InsertFigure 2: (b) Model Accuracy and Amplification Scale 1.0 1.2 1.4 1.6 Amplification factor1020304050Selfreflection rate (%) Constant Intervene Wait Token InsertFigure 3: (c) Self-Reflection Ra- tio and Amplification Scale Concretely, we randomly sample 160 questions from the training set of MATH dataset and generate CoT responses using two models, i.e.,R1-distilled-Qwen-7B [ 1] and Qwen-2.5-7B-Instruct [ 15]. As the former one has acquired the long-CoT ability distilled from DeepSeek-R1 while the last one not, we sample the positive and negative ones from their responses, respectively. Then, we feed each question with corresponding positive and negative responses into three LLMs ( i.e.,R1-distilled-Qwen- 7B, Qwen2.5-7B-Math-base, and Qwen2.5-7B-base), and compute the activation value difference in all the MLP layers. Finally, we average all the activation value difference, and select the ones with higher difference ( i.e.,>4) as the long-CoT ability related activations. Finding-1: Long-CoT Activations Mainly Exist in Last Few Layers. As shown in Fig. 1, only very few long-CoT related activations exist in the former layers, i.e.,near zero before the 18-th layer. After that, the number of long-CoT activations increases in a near-linear tendency, and the last layer even contains more than 50% long-CoT activations of the LLM. It indicates that the last few layers contribute more on the long-CoT ability. Finding-2: Long-CoT LLM Contains More Long-CoT Related Activations. By comparing the curves of R1-distilled-Qwen-7B with other two LLMs, we can see its long-CoT related activations are consistently more than the two models. Thus, we have the hypothesis that activation matters in eliciting the long-CoT ability, and will verify it then. 2.2 Ability Control through Activation Intervention Study We study whether the related activations found can be used to elicit the long-CoT ability. We adopt a rather simple way that consistently amplifies the values of these activations and adds “wait”
https://arxiv.org/abs/2505.17697v1
token during inference, and observe whether the accuracy and self-reflection rate can also increase. Analysis Setup. We first use the above contrastive example pairs to measure the long-CoT ability correlation of each activation. Then, we rank all the activations, and select the top-200 ones as the key activations. For intervention, we use the following amplification factors, i.e.,1.2, 1.4, and 1.6, and larger factors would cause the generation unstable. In our experiments, we see that only amplifying the activations can not effectively lead to stable self-reflection action. Thus, we use a simple way that inserts the “wait” token at the start of the sentence once the last generated sentence contains the math equation. It serves as a trigger token to force LLMs to perform self-reflection. We visualize the accuracy and self-reflection rate in the test set of MATH dataset. The self-reflection rate is computed by computing the percentage of responses that contain special reflection tokens and phrases, e.g., “wait” and “let me double check”. Finding-3: Simple Activation Amplification with Wait Token Insert Induces Long-CoT Rea- soning. As shown in Fig. 2 and Fig. 3, amplifying the activation value with the wait token insert strategy can significantly improve both the self-reflection rate and accuracy. As the case shown in Appendix, we can see that the generated response follows the long-CoT reasoning style in existing long-CoT LLMs. Here, the wait token insert trick to force self-reflection also matters. Without it, the increase of self-reflection rate is slow and the accuracy improvement is also not stable. Therefore, it is promising to design a more effective method by further optimizing the trigger token and neuron intervention strategies, to efficiently elicit long-CoT ability. 3 50 0 50 100 Position relative to target token0.00.20.40.6Avg activation 'Wait' '1' 'the'Figure 4: (a) Wait Token Insert Induces Long-CoT Reasoning 0 250 500 750 1000 Token02468Activation ValueQwen2.5-7B-base R1-distill-Qwen-7BFigure 5: (b) Activation Patterns of base and long CoT LLMs 0 500 1000 Token051015Activation ValueQwen2.5-7B-InstructFigure 6: (c) Activation Pattern of Qwen2.5-7B-Instruct 2.3 Activation Dynamics Analysis To help design appropriate trigger token and activation intervention strategies, we further conduct the qualitative study to analyze the activation dynamics of different LLMs. We aim to find out whether each activation has a special pattern that can be predictable during inference. Analysis Setup. For analyzing the activation dynamics, we track the inference-time value changes of the top-1 activation (found using contrastive example pairs) from Qwen2.5-7B-base, Qwen2.5- 7B-Instruct, and R1-distill-Qwen-7B. We randomly select few questions from MATH test set, and have seen very similar tendency across them. Thus, we visualize a random one in Fig. 5. Besides, we also see that the high activation values often appear after the “wait” token during inference. Thus, we collect the activation values and their relative positions to few special tokens, and then draw the figure to visualize the different activation change tendencies around different special tokens. Finding-4: Base and Long-CoT Models Exhibit Similar Sparse Activation Dynamics. As shown in Fig. 4, the top-1 activation of the two LLMs is activated with a very sparse pattern during inference. It is mostly near zero value,
https://arxiv.org/abs/2505.17697v1
but activated into a relatively high value in special positions (e.g., “wait” token). We see that the high-value positions of the two LLMs are mostly the same ones, and the corresponding values are also very similar. It indicates that the base and long-CoT models have similar sparse activation dynamics. Based on this finding, it is promising to use the base LLM itself to predict the long-CoT related activations. Finding-5: Instruct Model Activations are Very Inactive. As shown in Fig. 6, the activations of the instruct model are mostly near zero. In contrast to the base model, it is very inactive and even like “dead” activations. A possible reason is that these activations have been biased after learning large-scale short instructions in the post-training stage. This may make it hard for the LLM to adapt to this new reasoning pattern, as shown by other attempts to elicit the long-CoT ability of LLMs [ 16,1]. Finding-6: Activations around Wait Token Have Predictable Pattern. As shown in Fig. 4, the activation value curves around the common token “the” and digital token “1” both contain very few changes, while the one of the wait tokens has a rather outstanding sharp rising then falling tendency. It indicates that the wait token is a special trigger token that the long-CoT LLM has learned for waking up the activation action. Besides, the falling part of the wait token curve is likely to follow a logarithmic decay function. As the above trigger word and activation patterns are very significant, it is promising to design a function to mimic them for eliciting the long-CoT ability of LLMs. 3 Training-free Activation Control Method According to our findings in Section 2.1, the activation patterns of long-CoT ability are predictable. Therefore, it is feasible to efficiently elicit long-CoT reasoning through activation control. In this section, we develop a training-free activation control method, namely EELo-CoT, and evaluate its effectiveness on three complex reasoning tasks. 4 sqrt 4 2 = \nWait 4 2 = \n Letusverify… AttentionMLP AttentionMLP AttentionMLP Previous LayersAmplify ModuleMLP Layer Self-Attention Wait Token Insert Activation Identify Parameter -efficient TuningLoRA Training -freeActivation Controlf(t) = a -b ln(t + c)Figure 7: The overall framework of the proposed Long-CoT elicitation method. Based on the identified activations, the left part is the proposed training-free activation control method, and the right part is the parameter-efficient training method. 3.1 Methodology Our method is composed of the analytic function based activation intervention and the forcing reflection strategies. Given the LLM, our method only requires few contrastive example pairs to help fit a function, and then can elicit it to perform long-CoT reasoning when necessary. Activation Amplification with Analytic Function. According to Finding-6 in Section 2.3, the activation values associated with the long-CoT ability follow a distinct logarithmic decay function (e.g.,f(t) =−log(t)). To capture this, we utilize the contrastive example pairs to identify key activations (following Section 2), and then collect the value trajectories of all activations across a fixed token window following the token “Wait”. Next, we use the above data to fit the following function. Here, we show the
https://arxiv.org/abs/2505.17697v1
computed coefficients of Qwen2.5-7B-base as an example: f(t) =a−b·log(t+c)where a= 0.17, b= 0.033, c=−0.997 (1) As shown in Fig. 4, this curve can well capture the activation changes of Qwen2.5-7B-base. Based on it, we further design an activation amplification rule. Let Adenote the original activation value of the LLM at the current token, where tis its relative distance with the trigger token. The new activation A′is computed as: A′=A·(1 +αf(t)), (2) where f(t)produces the reference value of the amplification, and αis the tunable scaling factor. Forcing Reflection after Reasoning. Based on the above activation intervention strategy, we devise the forcing reflection strategy to support it, for deliberating the last reasoning step if necessary. Concretely, we leverage the number of digits in the last sentence as the metric to determine if perform forcing reflection. Once we detect kor more digits, we insert a “wait” token in the starting position of the next sentence. Then, the next sentence will continue to perform self-reflection after the “wait” token, and meanwhile the analytic function based intervention strategy will also be activated to guide the LLM. In addition, we also add a cool-down window setting that temporarily locks down the forcing reflection strategy in the next four sentences after its execution. Such a way prevents the LLM from repeating meaningless self-reflection during inference. 3.2 Experimental Settings We introduce the details of our experimental setting to evaluate the training-free method. Datasets. We select the following three benchmarks for evaluation: 5 Table 1: The evaluation results of our method using Qwen2-7B-base, Qwen2.5-7B-base, Qwen2.5- Math-7B-base on Math500, AMC23 and GPQA ScenariosMath500 AMC23 GPQA Acc. Length Reflect Acc. Length Reflect Acc. Length Reflect Qwen2-7B-base 30.80 685.52 3.20 12.50 795.75 2.50 26.77 494.35 6.06 + Forcing Reflection 30.00 1019.13 65.20 10.00 1029.2 70.00 26.77 781.29 66.67 + Constant Intervention 28.60 761.64 3.40 7.50 729.83 7.50 28.28 484.92 5.56 + Forcing & Constant 29.20 990.91 65.00 20.00 1096.88 80.00 26.77 856.33 64.65 + EELo-CoT (Ours) 31.20 958.74 65.20 20.00 1146.2 77.50 30.30 774.31 65.15 Qwen2.5-7B-base 69.20 328.20 10.20 45.00 436.15 7.50 30.30 457.34 4.04 + Forcing Reflection 66.00 376.75 47.80 40.00 613.33 62.50 33.33 598.06 68.69 + Constant Intervention 69.20 329.32 11.40 45.00 488.23 17.50 33.33 466.95 5.56 + Forcing & Constant 66.40 384.76 45.40 47.50 583.62 80.00 31.31 598.22 71.72 + EELo-CoT (Ours) 72.00 369.25 49.40 57.50 443.52 70.00 35.86 585.78 68.18 Qwen2.5-Math-7B-base 68.00 381.67 73.80 65.00 547.70 60.00 33.84 476.88 28.28 + Forcing Reflection 64.00 424.32 88.40 57.50 549.50 90.00 32.83 650.62 78.79 + Constant Intervention 64.20 381.28 71.40 45.00 583.02 65.00 33.33 461.90 29.29 + Forcing & Constant 62.60 416.90 90.20 47.50 639.83 97.50 34.34 515.15 78.79 + EELo-CoT (Ours) 76.00 441.00 90.60 65.00 625.50 95.00 37.88 552.49 78.79 •MATH [14]: it consists of 500 high-school level math problems across algebra, geometry, calculus, and number theory. •AMC23 : it consists of problems from the 2023 American Mathematics Competitions (AMC 10 and AMC 12), covering challenging multi-choice problems designed for high school students. •GPQA-Diamond [17]: GPQA benchmark focuses on high-complexity questions. We select the Diamond split that includes only the most
https://arxiv.org/abs/2505.17697v1
difficult examples. Baselines. In our results table, the Forcing Reflection row corresponds to the intervention where the token "wait" is inserted to encourage reflective reasoning. Constant Intervention represents a baseline where a fixed amplification factor is applied to selected neuron activations. Forcing & Constant combines both the reflection-triggering token and the constant amplification strategy. Finally, Our Rule refers to our proposed intervention method, which applies a dynamic, activation-based scheduling function to modulate neuron behavior. Implementation Details. We randomly sample 2,000 problems from the MATH training dataset and generate CoT responses using two models: the R1-distilled Qwen 7B and the Qwen 2.5 7B Instruct. We filter these into two groups: Group 1: Contains responses that show self-reflection (e.g., “wait”, “let me double check”), exceed 1000 tokens, and are correct. Group 2: Contains responses with no self-reflection, shorter than 1000 tokens, and incorrect. After filtering, we obtain 160 matched question pairs. Each CoT is passed through a model while recording the MLP neuron activations. We compute the per-neuron activation differences between the two groups. The self-reflection ratio reflects the presence of self-reflective behaviors in the generated CoTs, which is computed by matching a curated list of reflective phrases ( i.e.,“let me double check”, “wait”, “verify”). We set the minimum number of digits kin the last sentence to insert “wait” token in the starting position of the next sentence as 5. The cool-down window setting that temporarily locks down the forcing reflection is set as 4. The number of activations we used are 150 with amplification factor set as 4. 3.3 Results Analysis Main Results. The results of our method and baseline in our evaluation are presented in Table 1. We can observe that our method achieves the highest accuracy on both base models on Math500: 72.00 %for Qwen2.5-7B-base and 76.00 %for Qwen2.5-7B-Math-base, surpassing constant and trigger-based interventions by up to 6 percentage points. The self-reflection rate also increases substantially, reaching 90 %with our rule compared to only 10.2 %in the original base and 73.8 % in the math-tuned model. Our method balances this with moderately increased output length. On 6 Table 2: The evaluation results of our method using Qwen2.5-1.5B-base, Qwen2.5-3B-base ScenariosMath500 GPQA Accuracy Reflection Accuracy Reflection Qwen2.5-1.5B-base 15.60 0.20 22.73 2.53 +EELo-CoT (Ours) 18.60 23.40 28.79 34.34 Qwen2.5-3B-base 27.80 0.80 27.27 5.05 +EELo-CoT (Ours) 31.20 33.40 28.28 27.27 the AMC23 benchmark, our rule also shows strong gains in reasoning quality. For Qwen2.5-7B- base, it improves accuracy from 45.00 %to 57.50 %, and self-reflection from 7.5 %to 70.00 %. On the math-tuned variant, our method achieves 65.00 %accuracy and the highest reflection rate of 95.00 %. GPQA presents a knowledge-intensive challenge. Our rule yields notable improvements, boosting accuracy from 30.30 %to 35.86 %on Qwen2.5-7B-base, and from 33.84 %to 37.88 %on Qwen2.5-7B-Math-base. In both cases, self-reflection rates increase to nearly 79 %, confirming the rule’s generalizability across reasoning domains. Applying to Other LLMs. The results of our method and baseline tested on other LLMs are presented in Table 2. We can see that Qwen2.5-1.5B-base model achieved an Accuracy of 18.60 %compared to the baseline 15.60
https://arxiv.org/abs/2505.17697v1
%and at the same time reflection rate increases dramatically from 0.20 %to 23.40 %, suggesting that the intervention effectively triggers self-monitoring behavior. Measured on the GPQA, Accuracy improves from 22.73 %to 28.79 %, showing a consistent positive effect on complex, knowledge-intensive question answering. Reflection rate increases from 2.53 % to 34.34 %, again indicating a strong promotion of reflective behaviors, which may correlate with improved reasoning depth. Similarly, Qwen2.5-3B-base model achieved higher accuracy than its baseline 27 %and with a much higher reflection rate on Math500 and GPQA respectively. 4 Parameter-efficient Training Method Since the long-CoT related activations are sparse and predictable, it is feasible to train a lightweight activation adaptation module specially for controlling these activations. As such a way is straightfor- ward to learn the key of long-CoT ability, it can be more efficient and effective. In this section, we develop a parameter-efficient training method, which updates 1.51 % of the total parameters and can perform better than LoRA (update 6.15 % parameters) and even full-parameter trained models. 4.1 Methodology In our method, we utilize few contrastive example pairs to help identify the key activations, and then train the activation amplification module with LoRA layers in the LLM. Activation Amplification Module. Our activation amplification module is actually a special gate layer that can amplify the activation value within a certain range. In the typical MLP layer of LLMs, the activation value is computed by first multiplying the input vector x∈Rhwith a projection matrix Wg∈Rh×dand then feeding it into the activation function e.g., ReLU and SiLU. A(x) = Act( xWg) (3) On top of this, we incorporate an activation amplifier module that consists of a linear projection matrix Wa∈Rh×nand a scaled sigmoid function, where ndenotes the number of identified key activations, i.e.,100. Given the input x, we multiply it with the projection matrix Wa, and feed it to the sigmoid function to normalize the value range to [0,1]. Then, we multiply it with a scale coefficient βto obtain the amplification scale vector for the nactivations. Next, we multiply the original activation values with the corresponding amplification scale in the vector. All the above modifications can be formulated as the following equation: A(x) = Act( xWg)⊙σ(xW a)·β (4) where Actis the activation function and σis the sigmoid function. In this way, this module enables adaptively control the amplification with the consideration of the input context. 7 Table 3: Performance comparison on Math500, AMC23, and GPQA benchmarks. Each benchmark includes both accuracy and average length. Method %param. Math500 AMC23 GPQA Accuracy Length Accuracy Length Accuracy Length Full Finetuning 100 91.60 3642.71 92.50 14170.80 69.19 7770.40 LoRA 6.15 91.60 3952.61 85.00 14827.93 66.17 8508.25 EELo-CoT (Ours) 1.51 90.20 3754.20 88.75 7077.48 70.02 8593.46 Parameter-efficient Training. During training, we add the activation amplification module in the last layer, and also add LoRA in other layers. Since our empirical findings have shown that most long-CoT related activations are in the last few layers, we reduce the rank of the former layer LoRA to 64, while existing work sets the rank to 256 for ensuring the effectiveness
https://arxiv.org/abs/2505.17697v1
[ 18]. Besides, since only Wa∈Rh×nand the scalar βin the amplification module are trainable, this module is also more parameter-efficient than LoRA, i.e.,(h×100 + 1) vs.(h+d)×256. 4.2 Validation Experiments Experimental Settings. To verify the effectiveness of our parameter-efficient training method, we fine-tune Qwen2.5-32B-Instruct on LIMO dataset [ 19]. LIMO contains 817 carefully curated training samples and is specifically designed to improve the model’s mathematical and logical reasoning abilities.For implementation, our method was evaluated against both LoRA and full-parameter fine- tuning baselines. For the LoRA baseline, we set the rank to 256 and the scaling factor αto 512, applying LoRA to all eligible layers in the model. In contrast, our method adopts a more parameter- efficient design by using a lower rank of 64 on the first 63 decoder layers. Additionally, we inject an Activation Amplification Module into the final MLP layer. All original model parameters, except parameters in LoRA and Amplification Module are frozen. The number of amplified key activations, n, is set to 100. Our fine-tuning process completed within 8 hours on 8 NVIDIA A100 GPUs. To ensure a fair comparison, we adhere to the evaluation protocol outlined in previous work [ 19], assessing all methods on three benchmarks: AMC23, GPQA, and Math500. Specifically, we use the pass@1 metric across all benchmarks under a zero-shot chain-of-thought (CoT) setting. For Math500 and GPQA, we evaluate correctness via greedy decoding with a single sample per question. For AMC23, we generate 16 samples per question with a temperature of 0.7 and compute the unbiased pass@1 metric as proposed by [ 20]. In addition to evaluating model accuracy on the benchmarks, we computed the average response tokens generated by models for each problem, including both the CoT reasoning process and the final answer. This metric is crucial for assessing inference efficiency. All evaluations are accelerated using vLLM [21] for efficient inference. Result Analysis. Our method demonstrates that training only 1.51% of model parameters can achieve equivalent performance to that of full-parameter fine-tuning across all three benchmarks. This highlights the sparsity and localization of parameters responsible for complex reasoning behaviors such as long chain-of-thought (CoT) and self-reflection. This observation challenges the conventional assumption that large-scale fine-tuning is always necessary to acquire complex reasoning abilities. Instead, our approach shows that only a small, identifiable set of parameters can elicit long CoT ability. Although the model is trained solely on a small math-focused dataset, it attains 70.02% accuracy on the science-oriented GPQA benchmark, outperforming both the LoRA baseline (66.17%) and full fine-tuning (69.19%). This indicates that the selected activation patterns support reasoning strategies that generalize beyond the training domain. Moreover, on the AMC23 benchmark, our method reduces the average number of tokens used during inference by approximately 50%. 5 Related Work Large Language Models. LLMs have demonstrated remarkable capabilities in a variety of NLP tasks [ 22,23,24], and the performance of cutting-edge models ( e.g., Gemini, Grok, and Qwen) has also developed rapidly [ 25,26,27]. Generally, LLMs are first pre-trained on large-scale unsupervised corpus, and then fine-tuned on instructions and human alignment data, to better adapt to various
https://arxiv.org/abs/2505.17697v1
tasks and applications [ 28,29]. To support more complex scenarios in real world, enhancing the 8 reasoning ability is the key challenge for LLMs [ 3]. Early work mainly focuses on optimizing the prompt engineerings such as chain-of-thought and tree-of-thought [ 5,30], or collect more reasoning-related data to fine-tune or continual pre-train the model [ 31,32,33,34,35]. However, collecting high-quality reasoning-related data is challenging, due to the difficult nature of reasoning problems. Other strategies are proposed to utilize tool augmentation [ 36], and non-training decoding optimization [ 37], to efficiently improve the reasoning performance. Recent directions from test-time search strategies [ 38,30,39] to RL-based training [ 40,41] have shown that LLMs can be pushed toward more deliberate reasoning. However, the internal processes that give rise to reasoning ability are still poorly understood. Several interpretability studies attempt to pinpoint few reasoning-related neurons [ 42], or turns to representation-space manipulations to steer coordinated neuron activity [ 43]. Long Chain-of-thought Ability. Recent “slow-thinking” models, e.g., OpenAI o1 [ 3], DeepSeek- R1 [1], and QwQ [ 2] have pushed the frontiers of LLM reasoning. These systems tackle complex problems by generating long chain-of-thoughts (long CoTs) that integrate planning, verification, and self-reflection strategies [ 5]. DeepSeek-R1 have shown that the reasoning proficiency can surface organically during RL training, e.g., Aha Moment. Even though the reward signal does not directly favor longer outputs, the models discover that richer, more elaborate chains of thought improve answer accuracy and therefore yield higher rewards [ 1]. Based on this finding, a surge of work optimizes the RL algorithm [ 44,45] and collects high-quality data with verifiable rewards for training [ 19,46,47], achieving remarkable performance on complex tasks like mathematics [ 48,49] and program synthesis [ 6,50]. Empirically, few work reveals that thousands of expert-checked supervised examples are sufficient to unlock long-form reasoning [ 19], and the resulting capability generalizes well to new tasks even without additional task-specific data [ 51]. Our work builds on these insights, but goes deeply to analyze how the long CoT ability is elicited in LLMs. Activation Dynamics and Interpretability in LLM. A growing body of work aims to open the “black box” of LLMs by identifying how facts and features are internally represented. Activation patching methods [52] selectively intervene activations to see how it affects the output, helping find which layer and neuron govern the outcome. Based on it, several work [ 53] has revealed that certain middle-layer feed-forward neurons are pivotal for factual predictions, whose activation is highly correlated with recalling a particular factual item. Subsequent work [ 54] has taken this further to efficient knowledge updates. They localize in a small subset of parameters in the mid-layer transformer blocks, and assert a new factual association while leaving unrelated outputs unaffected. Beyond factual knowledge, interpretability research [ 55] indicates that many abstract attributes of model outputs are tied to specific activation patterns or subspaces, e.g., concepts like truthfulness, humor, or tone. By projecting activations onto these directions and nudging them, one can increase or decrease the presence of the corresponding concept [ 12]. However, the activation
https://arxiv.org/abs/2505.17697v1
patterns underpinning reasoning processes in LLMs remain largely unexplored. Most interpretability work has focused on relatively static features ( e.g., factual knowledge, sentiment) or small-scale reasoning ( e.g., tracing one inference step). It is also unclear what activation patterns are responsible for long CoT reasoning. 6 Conclusion In this paper, we conducted extensive experiments to analyze how the long chain-of-thought (CoT) ability is elicited in LLMs. Our empirical results found that a small set of high-impact activations in the last few layers greatly contribute to the long CoT ability. By simply amplifying these activations and adding “wait” tokens, we invoked the long CoT ability with significantly increased length, self-reflection rate, and accuracy. In our findings, we also saw that the activation changes have predictable patterns, i.e.,a sharp rise after special tokens and then fast decay. Thus, we devised a general training-free activation control strategy, which can automatically adjust the activation value at inference time to efficiently elicit long CoT. Concretely, in our approach, we only need few contrastive examples to identify the relevant activations, and then leverage a simple analytic functions to fit the activation pattern. Experimental results have shown the effectiveness of our methods in improving the performance of LLMs and eliciting the long-CoT ability. In the future, we will apply our method into multimodal LLMs [ 56] (e.g., VLM and Video-LLM) and more complex planning tasks [57] ( e.g., visual reasoning and GUI operation). Besides, we will also test the generalizability of our method in non-transformer architectures ( e.g., Mamba [58]). 9 References [1]Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shirong Ma, Peiyi Wang, Xiao Bi, et al. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 , 2025. [2]Qwen Team. Qwq: Reflect deeply on the boundaries of the unknown. https://qwenlm. github.io/blog/qwq-32b-preview/ , 2024. Published November 28, 2024; accessed May 15, 2025. [3]Aaron Jaech, Adam Kalai, Adam Lerer, Adam Richardson, Ahmed El-Kishky, Aiden Low, Alec Helyar, Aleksander Madry, Alex Beutel, Alex Carney, et al. Openai o1 system card. arXiv preprint arXiv:2412.16720 , 2024. [4]Kimi Team, Angang Du, Bofei Gao, Bowei Xing, Changjiu Jiang, Cheng Chen, Cheng Li, Chenjun Xiao, Chenzhuang Du, Chonghua Liao, et al. Kimi k1. 5: Scaling reinforcement learning with llms. arXiv preprint arXiv:2501.12599 , 2025. [5]Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems , 35:24824–24837, 2022. [6]Haotian Xu, Xing Wu, Weinong Wang, Zhongzhi Li, Da Zheng, Boyuan Chen, Yi Hu, Shijia Kang, Jiaming Ji, Yingying Zhang, et al. Redstar: Does scaling long-cot data unlock better slow-reasoning systems? arXiv preprint arXiv:2501.11284 , 2025. [7]Edward Yeo, Yuxuan Tong, Morry Niu, Graham Neubig, and Xiang Yue. Demystifying long chain-of-thought reasoning in llms. arXiv preprint arXiv:2502.03373 , 2025. [8]Ping Yu, Jing Xu, Jason Weston, and Ilia Kulikov. Distilling system 2 into system 1. arXiv preprint arXiv:2407.06023 , 2024. [9]Shreyas Chaudhari, Pranjal Aggarwal, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan, Ameet Deshpande, and Bruno Castro da
https://arxiv.org/abs/2505.17697v1
Silva. Rlhf deciphered: A critical analysis of reinforcement learning from human feedback for llms. arXiv preprint arXiv:2404.08555 , 2024. [10] Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302 , 2023. [11] Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush V osoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, et al. Language models are multilingual chain-of-thought reasoners. arXiv preprint arXiv:2210.03057 , 2022. [12] Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan J Vazquez, Ulisse Mini, and Monte MacDiarmid. Activation addition: Steering language models without optimization. arXiv e-prints , pages arXiv–2308, 2023. [13] Mor Geva, Avi Caciularu, Kevin Ro Wang, and Yoav Goldberg. Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space. arXiv preprint arXiv:2203.14680 , 2022. [14] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS , 2021. [15] Qwen Team. Qwen2.5: A party of foundation models, September 2024. [16] Jacob Mitchell Springer, Sachin Goyal, Kaiyue Wen, Tanishq Kumar, Xiang Yue, Sadhika Malladi, Graham Neubig, and Aditi Raghunathan. Overtrained language models are harder to fine-tune. arXiv preprint arXiv:2503.19206 , 2025. 10 [17] David Rein, Betty Li Hou, Asa Cooper Stickland, Jackson Petty, Richard Yuanzhe Pang, Julien Dirani, Julian Michael, and Samuel R Bowman. Gpqa: A graduate-level google-proof q&a benchmark. In First Conference on Language Modeling , 2024. [18] Dacheng Li, Shiyi Cao, Tyler Griggs, Shu Liu, Xiangxi Mo, Eric Tang, Sumanth Hegde, Kourosh Hakhamaneshi, Shishir G Patil, Matei Zaharia, et al. Llms can easily learn to reason from demonstrations structure, not content, is what matters! arXiv preprint arXiv:2502.07374 , 2025. [19] Yixin Ye, Zhen Huang, Yang Xiao, Ethan Chern, Shijie Xia, and Pengfei Liu. Limo: Less is more for reasoning. arXiv preprint arXiv:2502.03387 , 2025. [20] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde De Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 , 2021. [21] Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Principles , pages 611–626, 2023. [22] OpenAI. Gpt-4 technical report. ArXiv , abs/2303.08774, 2023. [23] Anthropic. The claude 3 model family: Opus, sonnet, haiku. Technical report, Anthropic, March 2024. [24] Google Gemini Team. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. ArXiv , abs/2403.05530, 2024. [25] An Yang, Anfeng Li, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Gao, Chengen Huang, Chenxu Lv, Chujie Zheng, Dayiheng Liu, Fan Zhou, Fei Huang, Feng Hu, Hao Ge, Haoran Wei, Huan Lin, Jialong Tang, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jing Zhou, Jingren Zhou, Junyang Lin, Kai Dang, Keqin Bao, Kexin Yang, Le Yu, Lianghao Deng,
https://arxiv.org/abs/2505.17697v1
Mei Li, Mingfeng Xue, Mingze Li, Pei Zhang, Peng Wang, Qin Zhu, Rui Men, Ruize Gao, Shixuan Liu, Shuang Luo, Tianhao Li, Tianyi Tang, Wenbiao Yin, Xingzhang Ren, Xinyu Wang, Xinyu Zhang, Xuancheng Ren, Yang Fan, Yang Su, Yichang Zhang, Yinger Zhang, Yu Wan, Yuqiong Liu, Zekun Wang, Zeyu Cui, Zhenru Zhang, Zhipeng Zhou, and Zihan Qiu. Qwen3 Technical Report. arXiv e-prints , page arXiv:2505.09388, May 2025. [26] Anthropic. The claude 3 model family: Opus, sonnet, haiku. https://www-cdn.anthropic. com/de8ba9b01c9ab7cbabf5c33b80b7bbc618857627/Model_Card_Claude_3.pdf , March 2024. Model Card. [27] Koray Kavukcuoglu. Gemini 2.5: Our most intelligent ai model. https://blog.google/ technology/google-deepmind/gemini-model-thinking-updates-march-2025/ #gemini-2-5-thinking , March 2025. Blog post. [28] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neural information processing systems , 35:27730–27744, 2022. [29] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. [30] Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Ad- vances in neural information processing systems , 36:11809–11822, 2023. 11 [31] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models. ArXiv , abs/2206.14858, 2022. [32] Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for mathematics. ArXiv , abs/2310.10631, 2023. [33] Zhihong Shao, Peiyi Wang, Qihao Zhu, R. X. Xu, Jun-Mei Song, Mingchuan Zhang, Y . K. Li, Yu Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language models. ArXiv , abs/2402.03300, 2024. [34] K. Singhal, Shekoofeh Azizi, Tao Tu, Said Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Kumar Tanwani, Heather J. Cole-Lewis, Stephen J. Pfohl, P A Payne, Martin G. Seneviratne, Paul Gamble, Chris Kelly, Nathaneal Scharli, Aakanksha Chowdhery, P. A. Mans- field, Blaise Agera y Arcas, Dale R. Webster, Greg S. Corrado, Yossi Matias, Katherine Hui-Ling Chou, Juraj Gottweis, Nenad Tomaev, Yun Liu, Alvin Rajkomar, Jolle K. Barral, Christopher Semturs, Alan Karthikesalingam, and Vivek Natarajan. Large language models encode clinical knowledge. Nature , 620:172 – 180, 2022. [35] Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony S. Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language model for science. ArXiv , abs/2211.09085, 2022. [36] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435 , 2022. [37] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, and Mike Lewis. Contrastive decoding: Open-ended text generation as optimization. In Annual Meeting of the Association
https://arxiv.org/abs/2505.17697v1
for Computational Linguistics , 2022. [38] Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhit- ing Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992 , 2023. [39] Xinyu Guan, Li Lyna Zhang, Yifei Liu, Ning Shang, Youran Sun, Yi Zhu, Fan Yang, and Mao Yang. rstar-math: Small llms can master math reasoning with self-evolved deep thinking. arXiv preprint arXiv:2501.04519 , 2025. [40] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171 , 2022. [41] Liangchen Luo, Yinxiao Liu, Rosanne Liu, Samrat Phatale, Meiqi Guo, Harsh Lara, Yunxuan Li, Lei Shu, Yun Zhu, Lei Meng, et al. Improve mathematical reasoning in language models by automated process supervision. arXiv preprint arXiv:2406.06592 , 2024. [42] Daking Rai and Ziyu Yao. An investigation of neuron activation as a unified lens to explain chain-of-thought eliciting arithmetic reasoning of llms. arXiv preprint arXiv:2406.12288 , 2024. [43] Xinyu Tang, Xiaolei Wang, Zhihao Lv, Yingqian Min, Wayne Xin Zhao, Binbin Hu, Ziqi Liu, and Zhiqiang Zhang. Unlocking general long chain-of-thought reasoning capabilities of large language models via representation engineering. arXiv preprint arXiv:2503.11314 , 2025. [44] Chen Li, Nazhou Liu, and Kai Yang. Adaptive group policy optimization: Towards stable training and token-efficient reasoning. arXiv preprint arXiv:2503.15952 , 2025. [45] Soham Sane. Hybrid group relative policy optimization: A multi-sample approach to enhancing policy optimization. arXiv preprint arXiv:2502.01652 , 2025. 12 [46] Yi Su, Dian Yu, Linfeng Song, Juntao Li, Haitao Mi, Zhaopeng Tu, Min Zhang, and Dong Yu. Expanding rl with verifiable rewards across diverse domains. arXiv preprint arXiv:2503.23829 , 2025. [47] Wasi Uddin Ahmad, Sean Narenthiran, Somshubra Majumdar, Aleksander Ficek, Siddhartha Jain, Jocelyn Huang, Vahid Noroozi, and Boris Ginsburg. Opencodereasoning: Advancing data distillation for competitive coding. arXiv preprint arXiv:2504.01943 , 2025. [48] Peiyi Wang, Lei Li, Zhihong Shao, RX Xu, Damai Dai, Yifei Li, Deli Chen, Yu Wu, and Zhifang Sui. Math-shepherd: Verify and reinforce llms step-by-step without human annotations. arXiv preprint arXiv:2312.08935 , 2023. [49] Lunjun Zhang, Arian Hosseini, Hritik Bansal, Mehran Kazemi, Aviral Kumar, and Rishabh Agarwal. Generative verifiers: Reward modeling as next-token prediction. arXiv preprint arXiv:2408.15240 , 2024. [50] Dayu Yang, Tianyang Liu, Daoan Zhang, Antoine Simoulin, Xiaoyi Liu, Yuwei Cao, Zhaopu Teng, Xin Qian, Grey Yang, Jiebo Luo, et al. Code to think, think to code: A survey on code-enhanced reasoning and reasoning-driven code intelligence in llms. arXiv preprint arXiv:2502.19411 , 2025. [51] Yifan Du, Zikang Liu, Yifan Li, Wayne Xin Zhao, Yuqi Huo, Bingning Wang, Weipeng Chen, Zheng Liu, Zhongyuan Wang, and Ji-Rong Wen. Virgo: A preliminary exploration on reproducing o1-like mllm. arXiv preprint arXiv:2501.01904 , 2025. [52] Fred Zhang and Neel Nanda. Towards best practices of activation patching in language models: Metrics and methods. arXiv preprint arXiv:2309.16042 , 2023. [53] Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. Transformer feed-forward layers are key-value memories. arXiv preprint arXiv:2012.14913 , 2020. [54] Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu
https://arxiv.org/abs/2505.17697v1
Wei. Knowledge neurons in pretrained transformers. arXiv preprint arXiv:2104.08696 , 2021. [55] Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in language models without supervision. arXiv preprint arXiv:2212.03827 , 2022. [56] Shuai Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Sibo Song, Kai Dang, Peng Wang, Shijie Wang, Jun Tang, Humen Zhong, Yuanzhi Zhu, Mingkun Yang, Zhaohai Li, Jianqiang Wan, Pengfei Wang, Wei Ding, Zheren Fu, Yiheng Xu, Jiabo Ye, Xi Zhang, Tianbao Xie, Zesen Cheng, Hang Zhang, Zhibo Yang, Haiyang Xu, and Junyang Lin. Qwen2.5-vl technical report. arXiv preprint arXiv:2502.13923 , 2025. [57] Tianbao Xie, Danyang Zhang, Jixuan Chen, Xiaochuan Li, Siheng Zhao, Ruisheng Cao, Toh Jing Hua, Zhoujun Cheng, Dongchan Shin, Fangyu Lei, Yitao Liu, Yiheng Xu, Shuyan Zhou, Silvio Savarese, Caiming Xiong, Victor Zhong, and Tao Yu. Osworld: Benchmarking multimodal agents for open-ended tasks in real computer environments, 2024. [58] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752 , 2023. 13 Table 4: The evaluation results of our method using Llama-3.1-8B-base on Math500 and GPQA. Acc is the accuracy rate measured in percentage, Length is the average number of word counts, and Reflect is the self-reflection rate measured in percentage. ScenariosMath500 GPQA Acc. Length Reflect Acc. Length Reflect Llama-3.1-8B-base 9.8 713.5 0.8 29.29 769.04 3.03 + Forcing Reflection 7.6 892.68 44.60 24.75 999.14 34.85 + Constant Intervention 5.0 982.81 1.2 26.77 708.55 2.53 + Forcing & Constant 3.8 954.97 27.4 23.23 802.88 26.26 + EELo-CoT (Ours) 11.8 783.09 38.40 30.81 965.46 32.32 Table 5: The evaluation results of our method using Qwen2.5-32B-base on AMC23 and GPQA ScenariosAMC23 GPQA Accuracy Reflection Accuracy Reflection Qwen2.5-32B-base 45.00 22.50 38.38 3.03 +EELo-CoT (Ours) 62.50 75.00 40.40 55.56 Technical Appendices and Supplementary Material A EELo-CoT on Other LLMs To further verify the effectiveness of our training-free EELo-CoT, we conduct experiments under the Llama-3.1-8B-base setting. The result is presented in Table. 4. LLaMA-3.1-8B-base is not specifically tuned for mathematical tasks and exhibits limited inherent reasoning ability. Consistent with earlier findings, as shown in the Table. 1, EELo-CoT outperforms all baselines and ablation variants across both Math500 and GPQA benchmarks. Specifically, on Math500, EELo-CoT achieves the highest accuracy 11.8% and a notable self-reflection rate 38.40%. Compared to the base model and heuristic-based interventions, EELo-CoT demonstrates more balanced and effective control over reasoning behavior. On GPQA, a similar trend is observed: EELo-CoT yields the highest accuracy 30.81% and a significantly elevated reflection rate 32.32% compared to the base model 29.29% accuracy and 3.03% reflection rate. Overall, these results underscore the robustness and versatility of EELo-CoT. Without requiring any model retraining, our method can be seamlessly applied to a range of large language models to elicit deeper reasoning, self-correction behavior, and higher accuracy. B EELo-CoT on Larger LLMs To evaluate the scalability and generalizability of our proposed training-free EELo-CoT method, we test the method on Qwen2.5-32B-base model. The result is presented in Table. 5. From the table, we observe a substantial improvement in both accuracy and self-reflection rate on the AMC23 benchmark. Accuracy improves from
https://arxiv.org/abs/2505.17697v1
45% to 62.5%, while the reflection rate increases dramatically from 22.5% to 75%. On GPQA, although the accuracy improves slightly from 38.38% to 40.40%, the reflection rate sees a significant jump from 3.03% to 55.56%, demonstrating the method’s ability to elicit reflective reasoning even in knowledge-heavy domains. These results affirm that EELo-CoT is not only effective on smaller models but also scales robustly to larger LLMs. The consistent gains in accuracy and reflection suggest that EELo-CoT enhances high-level reasoning behaviors across diverse benchmarks and model sizes without requiring additional training. C Visualization of Parameter-efficient Trained Activation In this part, we visualize the activation pattern from the trained Qwen2.5-32B-instruct model in Fig. 9. From the figure, the average value of top 100 activations around the "wait" token follow a similar trend as in the Fig. 8. This trend consistency substantiates the efficacy and robustness of our 14 50 0 50 100 Position relative to target token0.00.20.40.6Avg activation 'Wait' '1' 'the'Figure 8: Wait Token Activa- tion Trend from the R1 distilled Qwen Model 50 0 50 100 Position relative to token0.0000.0250.0500.0750.100Average activation 'Wait' TokenFigure 9: Wait Token Activation Trend from the Trained Instruct Model 0 2500 5000 7500 Token index05101520Activation valueDeepSeek-R1-Distill-Qwen-32B parameter-effecient trained modelFigure 10: Activation Patterns of Trained and Long CoT LLMs parameter-efficient training in boosting the activations at the self-reflection moment. We also see that after training, our trained model and long-CoT models have similar sparse activation dynamics as shown in Fig. 10. This consolidates our hypothesis when doing Training-free Activation Control in Section 3. D Case Study To better illustrate the effectiveness of our proposed training-free EELo-CoT, we present the case study on Math500 dataset in Table. 6. In this case, we can observe that before applying the ELLo-CoT, Qwen2.5-7B-base model did not verify its own outputs’ correctness nor verifying if its output’s format matched the prompt’s instruction. However, with the ELLo-CoT, the model activates the self-reflection, "Wait, I see the mistake here," at an important moment before outputting the final answer, which leads to a correct answer in the end. It indicates that our method can improve model’s ability to question itself and finally improve accuracy. E Limitations While our work presents a framework for eliciting long chain-of-thought (CoT) reasoning in LLMs via training-free activation control, several limitations remain: First, our activation intervention strategy is based on empirical findings—such as inserting the "wait" token or using digit-based triggers to activate self-reflection. These rules are simple yet effective. The simplicity of our approach, while a strength in terms of implementation and generalizability, may not fully capture the subtle activation that requires more nuanced stimulation. Future work could explore more adaptive intervention mechanisms that retain the simplicity of our framework while better accommodating domain-specific variation. Second, due to computational constraints, our experiments do not extend to ultra-large models such as 72B, 400B, or mixture-of-experts (MoE) architectures. While our study focuses on self-reflection—a key behavior linked to long chain-of-thought (CoT) reasoning—this emphasis stems from prior work suggesting that long CoT is an important indicator of a model’s ability to monitor and revise its
https://arxiv.org/abs/2505.17697v1
own reasoning. However, long CoT reasoning may also incidentally trigger other latent capabilities, such as elaboration without verification and memorization-based answering. Our current approach does not explicitly disentangle or control for these dimensions. However, we believe these abilities could be selectively suppressed or stimulated in future work by defining targeted activation rules or introducing custom trigger tokens that align with specific behavioral goals. 15 Table 6: Case Study on Math500 Question #3 Problem How many positive whole-number divisors does 196 have? Ground-Truth Solution 9 Generated Solution from Qwen2.5-7B-baseSolve the problem by reasoning step by step before providing the final answer. Explain each step clearly.Finally, provide your final answer in LaTeX format: Your answer Problem: How many positive whole-number divisors does 196 have? Step-by-Step Solution: Let’s think step by step: 1.Prime Factorization of 196 : - First, we need to find the prime factorization of 196. - Start by dividing 196 by the smallest prime number, 2: 196÷2 = 98 - Continue dividing by 2: 98÷2 = 49 - Now, 49 is not divisible by 2, so we move to the next smallest prime number, which is 3. However, 49 is not divisible by 3 either. - Next, we try dividing by 5, but 49 is not divisible by 5. - Finally, we try dividing by 7: 49÷7 = 7 - And again: 7÷7 = 1 - Therefore, the prime factorization of 196 is: 196 = 22×72 2.Using the Formula for Divisors : - The number of positive divisors of a number nwith prime factorization n=pe1 1×pe2 2× ··· × pek kis given by: (e1+ 1)( e2+ 1)···(ek+ 1) - For 196 = 22×72, we have e1= 2 ande2= 2. Generated Solution from Qwen2.5-7B-base EELo-CoTSolve the problem by reasoning step by stepbefore providing the final answer. Explain each step clearly.Finally, provide your final answer in LaTeX format: Your answer Problem: How many positive whole-number divisors does 196 have? Step-by-Step Solution: Let’s think step by step: 1.Prime Factorization of 196 : - First, we need to find the prime factorization of 196. - Start by dividing 196 by the smallest prime number, 2: 196÷2 = 98 - Continue dividing by 2: 98÷2 = 49 - Now, 49 is not divisible by 2, so we move to the next smallest prime number, which is 3. However, 49 is not divisible by 3 either. - Next, we try dividing by 5, but 49 is not divisible by 5. - Finally, we try dividing by 7: 49÷7 = 7 - And again: 7÷7 = 1 - Therefore, the prime factorization of 196 is: 196 = 22×72 2.Using the Formula for Divisors : - The number of positive divisors of a number nwith prime factorization n=pe1 1×pe2 2× ··· × pek kis given by: (e1+ 1)( e2+ 1)···(ek+ 1) - For 196 = 22×72, we have e1= 2 ande2= 2. Wait, I see the mistake here. The correct prime factorization is 196 = 22×72, and the formula should be applied correctly: (2 + 1)(2 + 1) = 3 ×3 = 9 3.Final Answer : - Therefore, the number
https://arxiv.org/abs/2505.17697v1
arXiv:2505.17701v1 [cs.LG] 23 May 2025COUNTDOWN: Contextually Sparse Activation Filtering Out Unnecessary Weights in Down Projection Jaewon Cheon Industrial and Management Engineering Korea University jaewon_cheon@korea.ac.krPilsung Kang* Industrial Engineering Seoul National University pilsung_kang@snu.ac.kr Abstract The growing size of large language models has created significant computational inefficiencies. To address this challenge, sparse activation methods selectively deactivates non-essential parameters during inference, reducing compu- tational costs in FFNN layers. While exist- ing methods focus on non-linear gating mecha- nisms, we hypothesize that the sparsity of the FFNN layer lies globally in the form of a linear combination over its internal down projection matrix. Based on this insight, we propose two methods: M-C OUNT DOWN , leveraging indi- rect coefficients, and D-C OUNT DOWN , utiliz- ing direct coefficients of the linear combina- tion. Experimental results demonstrate that D- COUNT DOWN can omit 90% of computations with performance loss as low as 5.5% ideally, while M-C OUNT DOWN provides a predictor- free solution with up to 29.4% better perfor- mance preservation compared to existing meth- ods. Our specialized kernel implementations effectively realize these theoretical gains into substantial real-world acceleration. 1 Introduction Large Language Models (LLMs) have demon- strated remarkable capabilities across diverse appli- cations, from handling specific tasks to orchestrat- ing agent-based operations (OpenAI et al., 2024; DeepSeek-AI et al., 2024; Gemma Team et al., 2025). However, these advancements came at the cost of dramatically increased model sizes, creating enormous computational and resource demands. The inference process has emerged as a particu- larly acute efficiency constraint, forming a critical bottleneck for deploying LLMs in practical appli- cations. This inefficiency is further amplified by recent trends in test-time scaling, where models generate extensive reasoning, significantly increas- ing computational demands during inference (Jang et al., 2024; Deng et al., 2024). Consequently, re- search on LLM inference efficiency has intensified, Globally zero-out area Locally zero-out area DDDD DDDDDDFigure 1: Comparison of sparsity determinations: our approach determines sparsity from the full FFNN com- putation (turquoise box), whereas conventional methods like CATS (Lee et al., 2024) rely solely on non-linear activations (red box). aiming to reduce latency and memory consump- tion while preserving generation quality (Liu et al., 2024; Kwon et al., 2023; Cai et al., 2024). In this context, sparse activation has emerged as a prominent strategy to improve FFNN layer efficiency in Transformer-based LLM (Liu et al., 2023; Lee et al., 2024; Akhauri et al., 2024; Al- izadeh et al., 2024). Sparse activation methods dynamically identify and deactivate parameters un- necessary for a given input, thereby reducing com- putational load and accelerating inference. These methods are particularly beneficial since FFNN layers incur significant computational overhead in modern LLM architectures (Awasthi et al., 2024). The zero-out gating property of ReLU (Agarap, 2019) creates extensive sparsity in FFNN layers by forcing a large portion of neurons to output zero (Mirzadeh et al., 2024). This natural sparsity makes computations associated with these zero-valued neurons completely redundant. Existing sparse 1 activation methods leverage this property to iden- tify and skip these unnecessary computations (Sun et al., 2024; Zhang et al., 2024). However, recent LLMs largely
https://arxiv.org/abs/2505.17701v1
employ activations such as GeLU or SiLU (Hendrycks and Gimpel, 2016; Elfwing et al., 2018) with far less prevalent zero-out behav- ior (Mirzadeh et al., 2024), limiting these methods’ applicability. Further, Gated-MLP structures, now widely adopted as FFNN layers (Shazeer, 2020; Dauphin et al., 2017), introduce more complex parameter interactions than standard architectures. This invalidates the assumption that sparsity occurs only around non-linear activations. To overcome these limitations, we propose an approach that defines sparsity from a global view, extending beyond the non-linear activations by re- formulating the FFNN layer’s output as a weighted sum, as illustrated in Figure 1. Based on this ap- proach, we derive two sparse activation methodolo- gies: MONO -COUNT DOWN (M-C OUNT DOWN ) andDUAL-COUNT DOWN (D-C OUNT DOWN ).M- COUNT DOWN identifies sparsity based on the out- put of a single weight matrix in Gated-MLP, while D-C OUNT DOWN leverages two weight matrices. In evaluations, M-C OUNT DOWN consistently out- performs the baseline method CATS (Lee et al., 2024), achieving up to 29.4% better performance preservation with comparable inference speed. D- COUNT DOWN attains greater efficiency gains, re- ducing computations by up to 90% in FFNN layers with performance loss as low as 5.5% under opti- mal conditions. The contributions of this paper are as follows. •We introduce a novel theoretical framework that redefines sparsity through a weighted- sum perspective over down projection matri- ces, extending beyond the conventional focus on activation functions. •We demonstrate that analyzing coefficient vec- tors in the weighted sum enables superior sparsity decisions, resulting in two distinct approaches with complementary strengths. •We provide practical acceleration through opti- mized kernel implementations, enabling both methods to achieve substantial throughput im- provements across multiple state-of-the-art LLM architectures. 2 Related Works ReLU-based Sparse Activation Early works on sparse activation primarily leveraged the prop-erty of ReLU to enhance computational efficiency. These approaches identified that ReLU activation functions naturally create substantial built-in spar- sity by producing zeros for negative values (Li et al., 2023b). Several approaches have tried to detect these zero-valued activations to preemp- tively skip associated computations, as these neu- rons would have no impact on subsequent layers (Mirzadeh et al., 2024). Deja Vu (Liu et al., 2023) extended this concept by training lightweight pre- dictors to anticipate which neurons would be ze- roed out, further improving efficiency. While these methods showed impressive speed gains with mini- mal performance loss, their application faced signif- icant constraints. Notably, these approaches were practical only on architectures explicitly designed with ReLU activations, limiting their applicability as LLMs increasingly adopted alternative activa- tion functions (Akhauri et al., 2024). Non-ReLU Sparse Activation As LLM evolved to favor non-ReLU activation functions such as GeLU and SiLU, which rarely produce exact zeros, new methods emerged to extend sparsity benefits to these architectures. One direction involved ReLUfi- cation techniques that replace non-ReLU functions with ReLU, enabling the reuse of existing sparsity strategies (Song et al., 2024b, 2025; Zhang et al., 2024; Alizadeh et al., 2024). Another approach, such as by CATS (Lee et al., 2024), redesigned sparsity criteria to identify and skip computations
https://arxiv.org/abs/2505.17701v1
associated with near-zero activations rather than exact zeros. While these adaptations improved compatibility with modern LLM architectures, they remain fundamentally constrained by their narrow focus on local patterns around non-linear trans- formations, overlooking potential sparsity from a global perspective of the FFNN layer. This local- ized perspective may fail to fully capitalize on the potential sparsity distributed throughout modern Gated-MLP architectures, particularly considering the complex interactions among multiple weight matrices that define these structures. 3 Generalization of Sparse Activation Problem Formulation A Gated-MLP block con- sists of three weight matrices: Wup, Wgate, Wdown ∈Rdmodel×dinter. For this block, the input vector x and the output vector yare inRdmodel. The com- putation involves intermediate states defined as u=x·Wup, h=σ(x·Wgate), s=u⊙hinRdinter. When no sparsification is applied, which we re- 2 fer to as the Dense scenario, all parameters are activated, and the operation proceeds as follows: y= (x·Wup)⊙σ(x·Wgate) ·W⊺ down(1) where σdenotes a non-linear activation function, typically GeLU or SiLU. We now introduce our sparsity propagation framework, establishing sparse activation from a global perspective. We can activate only a valuable subset of weight vectors, with a marginal perfor- mance loss. Specifically, sparse activation under our framework follows: y= (x·WI up)⊙σ(x·WI gate) · WI down⊺(2) where Idenotes the column of indices of the weights selected for computation: WI=W[:, IDX ], IDX =THLD (·)(3) where THLD is any function filtering effective I. Notably, when individual threshold functions are defined separately for each matrix, identical output can be achieved through the unified intersection IDX : IDX =IDX up∩IDX gate∩IDX down (4) Consequently, even when sparsifying just one matrix and keeping others dense, the computation remains equivalent to applying this unified IDX across all matrices, which we denotes as shared- index property. Thus, if valuable sparsity patterns are identified in one matrix, they can propagate throughout the entire Gated-MLP. A critical challenge, therefore, is defining the optimal filtering function THLD to identify the most effective index set IDX to preserve globally essential computations while significantly reduce computational overhead. Limitation of Comparative Methodology CATS (Lee et al., 2024) partially satisfies our sparsity propagation framework. It identifies sparsity by examining the activation magnitude h=σ(x·Wgate), assuming activations squashed near zero indicate parameters to omit. Specifically, given a sparsity ratio k∈(0,1), CATS computes a threshold τk Cvia the Quantile (k,|h|)operation, selecting a cutoff below which the lowest k fraction of activations is excluded. Based on this threshold, CATS defines a sparse activation index as shown in Equation 6a.CATS leverages the shared-index property. How- ever, since the optimal THLD might depend on factors beyond non-linear activation region, CATS is theoretically limited in propagating an opti- malIDX throughout the Gated-MLP. Addition- ally, although h[i]is large, if the corresponding u[i] =x·Wup[i]is near zero, the final contribu- tions become minimal, which ideally should be filtered out due to their elementwise product. Threshold Variants To overcome these limita- tions, we reformulate the Gated-MLP computation as a linear combination of the Wdown weight vec- tors, thereby exploring additional possibilities for defining THLD as follows: y= (x·Wup)⊙σ(x·Wgate) ·W⊺ down =X is[i]·W⊺ down[i](5) This reformulation allows us to interpret output y
https://arxiv.org/abs/2505.17701v1
as a weighted sum over W⊺ downrow vectors, where coefficient s[i] = (x·Wup)⊙σ(x·Wgate) reflects thei-th row vector’s contribution to computation. The magnitude of these coefficients provides a nat- ural metric for determining which parameters to activate, as they quantify each vector’s significance to the output. Furthermore, since sis calculated as the el- ementwise multiplication of u=x·Wupand h=σ(x·Wgate), these intermediate vectors can also serve as indirect coefficient signals. This gen- eralized view reveals that each computation stage in the Gated-MLP can provide a distinct sparsity indicator, with selecting has the basis being equiv- alent to CATS’s approach. THLDk C(h, τk C) ={i| |h[i]|> τk C} (6a) THLDk M(u, τk M) ={i| |u[i]|> τk M} (6b) THLDk D(s, τk D) ={i| |s[i]|> τk D} (6c) Based on this view, we propose two variants of sparse activation that extend beyond prior ap- proaches relying solely on the magnitude of h. As shown in Equation 6, the first method, M- COUNT DOWN , applies thresholding directly to vec- toru, while the second method, D-C OUNT DOWN , applies thresholding to s. For each method, thresh- oldsτk Mandτk Dare calculated via Quantile (k,|u|) and Quantile (k,|s|)respectively. These methods offer complementary strengths: M-C OUNT DOWN provides practical implementa- tion with minimal overhead by examining only one 3 matrix multiplication, while D-C OUNT DOWN can offer more precise sparsity identification through direct coefficients of the weighted sum. A detailed discussion of these methods follows in section 4 4 Realization of Sparse Activation SPIdealand SPPracIn the previous section, we focused on establishing THLD and the correspond- ing indicator that theoretically guarantee the safe omission of parameters. Ideally, if these indica- tors are tractable in real-time inference, we can achieve the upper-bound performance defined by the method. However, accessing the indicator and deriving IDX from it is not trivial. Given this constraint, we distinguish between two distinctive perspectives: SPIdealexamines the theoretical upperbound performance achievable by each method, assuming that filtering based on sparsity indicators incurs no computational over- head. SPPracaccounts for real-world deployment constraints, particularly the latency of identifying sparse activation patterns. It evaluates whether methods can deliver actual inference speedups when all practical overheads are considered. The distinction is critical because methods with strong SPIdealperformance may not translate to SPPracbenefits if their practical implementation is computationally expensive. Conversely, focusing solely on SPPracwithout understanding the theoret- icalSPIdeallimits can lead to suboptimal solutions that fail to approach the best possible performance. Effective sparse activation requires both identify- ing truly essential computations via SPIdealand creating an efficient implementation to realize total computational savings through SPPrac. Constructing SPPracfor COUNT DOWN We now describe how to transform the theoretical SPIdealformulations of M-C OUNT DOWN andD- COUNT DOWN into efficient, practical SPPracim- plementations. ForM-C OUNT DOWN , the implementation is straightforward because its indicator udepends only on the matrix Wup. Therefore, its index set IDXk Mdefined in its SPIdealperspective can be obtained independently of other matrices in the Gated-MLP. This allows M-C OUNT DOWN to oper- ate without additional inference-time components, as computation over the remaining matrices can be selectively skipped based on u. To further
https://arxiv.org/abs/2505.17701v1
reduce overhead, we avoid comput- ingτk Mdynamically for each input. Instead, weapproximate it with a layerwise constant ˆτk M= 1 TPT t=1Quantile (k,|u(t)|)estimated during a cal- ibration phase with Tsampled inputs. In contrast, implementing D-C OUNT DOWN poses greater challenges because its indicator s requires nearly the entire Gated-MLP computation, negating the advantages of sparse activation. To tackle this challenge, we train a lightweight pre- dictor that estimates the optimal index set IDXk D directly from input x, avoiding the need to compute sduring inference. For each layer, the predictor outputs a score vector ˆswhere: ˆs[i] =( +∞ if|s[i]|>Quantile (k,|s|) −∞ otherwise Using this output, we define the predicted index set asdIDXk D={i|ˆs[i]>0}and activate only the corresponding weight columns during inference. For efficiency, the predictor must be highly accu- rate and computationally inexpensive during in- ference. Following (Liu et al., 2023; Alizadeh et al., 2024), we employ a low-rank approxima- tor consisting of two matrices: θA∈Rdmodel×drank andθB∈Rdrank×dinter, minimizing computational overhead while preserving prediction accuracy. al- gorithm 1 details the complete training procedure. Kernel Design Once the sparse activation index setIDX is determined, computation can be re- stricted to only the corresponding subset of weights, reducing the actual floating-point operation count (FLOPs). However, reducing FLOPs does not nec- essarily translate to improved inference latency. For instance, materializing an indexed weight ma- trix and performing standard vector-matrix multi- plication may still reduce FLOPs, but at the cost of increased memory access (Song et al., 2024a; Xue et al., 2024). Therefore, sparse computation should avoid incurring excessive memory traffic solely for the sake of reducing arithmetic operations. To address this, we implement custom kernels for both M-C OUNT DOWN andD-C OUNT DOWN using Triton (Tillet et al., 2019). The M- COUNT DOWN kernel builds upon CATS’s struc- ture (Lee et al., 2024), but optimizes it by fusing the non-linear activation to reduce additional memory access. For D-C OUNT DOWN , we design a kernel that efficiently supports predictor-based activation patterns. A naive implementation would require eight separate kernel launches for sparse computa- tion: indexing and GEMV for each of the three ma- trices, plus non-linear activation and elementwise 4 DD 101 1100 1 DD DD DD Activated Weight Omitted WeightNon-zero Values Zero ValuesMath Ops. From to Construct / Propagate (a) (b) DD DDDD 001 1101 1 DDDD DDDDFigure 2: COUNT DOWN Pipeline. Note that hpre=x·Wgate.Left (a): InM-C OUNT DOWN , we determine which parameters to activate by binarizing densely computed uwith pre-calculatedˆτk M.Right (b): InD-C OUNT DOWN , low-rank predictors θA,θB determine which parameters to activate. multiplication. Our implementation compresses this workload into just two kernels. This design ensures that FLOPs reductions directly translate into improved token throughput. Full implementa- tion details and pseudocode are in algorithm 2 and algorithm 3. 5 Experiments Experimental Setup We evaluate the proposed methods against other sparse activation baselines, primarily CATS (Lee et al., 2024) and Deja Vu (Liu et al., 2023). We also include a Dense vari- ant without any sparse activation for comparison. Experiments are conducted using four diverse state- of-the-art LLMs ranging from 8B to 14B
https://arxiv.org/abs/2505.17701v1
param- eters: Llama-3.1-8B-Instruct (Grattafiori et al., 2024), gemma-2-9b-it (Gemma Team et al., 2024), Qwen2.5-14B-Instruct (Qwen et al., 2024), and phi- 4 (Abdin et al., 2024). We test multiple sparsity ratios by varying kfrom 0.7 to 0.9, representing the fraction of parameters excluded from compu- tation. Implementation details are provided in Ap- pendix A.2. We examine both model performance preser- vation and computational efficiency. For model performance, we use the lm-eval-harness (Gao et al., 2024) framework to assess downstream tasks including ARC (Clark et al., 2018), HellaSwag (Zellers et al., 2019), PIQA (Bisk et al., 2020),OpenbookQA (Mihaylov et al., 2018), TruthfulQA (Lin et al., 2022), WinoGrande (Sakaguchi et al., 2020), and GSM8K (Cobbe et al., 2021). Unlike prior sparse activation studies, we also evaluate con- versational ability using LLM-as-a-Judge frame- work AlpacaEval 2.0 (Li et al., 2023a). To assess computational efficiency and inference speed, we benchmark kernel-level latency to quan- tify Gated-MLP speedups from sparse activation. We also measure end-to-end token throughput and analyze theoretical reductions in floating-point op- erations (FLOPs) and memory traffic. Downstream Task Performance As shown in Table 1, in the SPIdealsetting, D-C OUNT DOWN consistently outperforms all methods across all models and sparsity ratios, exhibiting negligible degradation even when compared to the dense base- line. This demonstrates the effectiveness of D- COUNT DOWN ’s sparsity criterion: the indicator saccurately reflects each parameter’s importance to the final output, serving as the coefficient in our linear combination formulation. This provides more informed filtering than methods like CATS which rely solely on gating magnitude. Even at 90% sparsity, D-C OUNT DOWN retains only the most impactful neurons, limiting performance drop to 5.5% in the best case among evaluated models. M-C OUNT DOWN , although less effective than D-C OUNT DOWN , consistently outperforms CATS. 5 InferenceModeLlama-3.1-8B-Instruct gemma-2-9b-it Qwen2.5-14B-Instruct phi-4 k=0.7k=0.8k=0.9k=0.7k=0.8k=0.9k=0.7k=0.8k=0.9k=0.7k=0.8k=0.9 Dense Full 0.616 0.645 0.674 0.655 SPIdeal DEJA VU 0.314 0.315 0.322 0.360 0.360 0.360 0.379 0.382 0.385 0.398 0.405 0.396 CATS 0.471 0.412 0.337 0.592 0.483 0.367 0.502 0.428 0.389 0.615 0.535 0.427 M-C OUNT DOWN 0.570 0.513 0.421 0.624 0.607 0.549 0.644 0.610 0.479 0.636 0.608 0.512 D-C OUNT DOWN 0.603 0.587 0.525 0.635 0.625 0.590 0.660 0.647 0.555 0.651 0.649 0.620 SPPrac CATS 0.504 0.450 0.350 0.605 0.502 0.360 0.556 0.478 0.390 0.633 0.591 0.448 M-C OUNT DOWN 0.571 0.528 0.447 0.632 0.617 0.588 0.651 0.624 0.535 0.639 0.620 0.555 D-C OUNT DOWN 0.442 0.419 0.387 0.555 0.563 0.520 0.526 0.457 0.437 0.499 0.445 0.417 Table 1: Average SPIdealandSPPracscores compared to Dense across all downstream tasks. Full task-wise results are provided in Appendix C. The gap between the two widens as the sparsity ratio increases, reaching over 29.4%. This demon- strates that M-C OUNT DOWN ’s indicator uis more predictive of useful computation than CATS’ indi- cator h. This may seem counterintuitive since u andhcontribute symmetrically via their element- wise product and thus should be equally informa- tive. We revisit this comparison in section 6. Deja Vu, which assumes ReLU-style zero-out behavior, suffers severe degradation in the SPIdeal setting. Given its reliance on predictors, which would further degrade under
https://arxiv.org/abs/2505.17701v1
the SPPracsetting, we excluded it from subsequent experiments. In the SPPracsetting, D-C OUNT DOWN expe- riences performance loss relative to the SPIdeal due to predictor sub-optimality, suggesting bet- ter prediction strategies are needed to fully real- ize its potential in deployment. In contrast, M- COUNT DOWN , thanks to its predictor-free design, exhibits nearly identical performance to its SPIdeal counterpart. Notably, M-C OUNT DOWN continues to outperform CATS across all sparsity settings, re- inforcing the effectiveness of its signal even under realistic constraints. LLM Chat Performance While prior studies rely on downstream task accuracy or perplexity, these metrics often fail to capture conversational performance. To address this, we evaluate each method using an LLM-as-a-Judge framework that directly assesses chat-level performance. As shown in Table 2, M-C OUNT DOWN main- tains nearly identical performance between the SPIdealandSPPracsettings, while also outperform- ing CATS in both. D-C OUNT DOWN exhibits no- ticeable degradation in SPPracdue to predictor lim- itations, but retains a dominant lead under SPIdeal. This trend aligns with the results observed in thedownstream task evaluations. InferenceModeAlpacaEval 2.0 k= 0.7k= 0.8k= 0.9 SPIdeal CATS 25.10 1.72 0.19 M-C OUNT DOWN 45.84 29.22 3.90 D-C OUNT DOWN 48.86 45.85 29.95 SPPrac CATS 31.63 10.47 0.25 M-C OUNT DOWN 38.31 33.80 15.88 D-C OUNT DOWN 3.40 2.81 1.16 Table 2: Average SPIdealandSPPracwin rates against Dense across all models. Full model-wise results are provided in Table 7. Efficiency and Speed To confirm that reduc- tions in computation indeed translate into infer- ence speedups, we measured kernel-level execu- tion latency under various sparsity ratios. Each ker- nel’s execution time was recorded from the start of the Gated-MLP computation, explicitly excluding other operations like token embedding or attention mechanisms. This allowed us to isolate the precise efficiency gains attributable to sparse activation. As shown in Figure 3, D-C OUNT DOWN achieves the fastest kernel execution time overall, despite the presence of a predictor, by skipping all three weight matrix computations. Although both M-C OUNT DOWN and CATS are predictor- free, M-C OUNT DOWN slightly outperforms CATS in kernel speed. Given that the only architectural difference between their kernels is whether the non- linear activation function is fused, this suggests thatM-C OUNT DOWN gains a minor but consistent speed advantage by fusing the activation computa- tion, thereby reducing memory traffic and avoiding additional overhead. 6 Figure 3: Kernel Speed for Llama-3.1-8B-Instruct. CATS, M-C OUNT DOWN andD-C OUNT DOWN show their respective SPPrackernel speeds, Full and Optimal show Dense while int(dinter×k)instead of dinter for the Optimal. Results for other models are in Figure 5. Furthermore, we measured average tokens gen- erated per second for generation lengths of 512 and 1024, providing a model-level speedup assess- ment in typical generation scenarios. As shown in Table 3, M-C OUNT DOWN achieves the high- est end-to-end token throughput. Meanwhile, D- COUNT DOWN demonstrates the best performance at the kernel level, and with further optimization, its overall throughput may be further enhanced. 6 Analysis M-C OUNT DOWN vs CATS While CATS and M-C OUNT DOWN share similar core ideas for sparse activation, our experimental results
https://arxiv.org/abs/2505.17701v1
show thatM-C OUNT DOWN consistently achieves better performance. To understand the performance gap between the indirect coefficient vectors uandh, we conduct a comparative analysis of how each influences and aligns with the oracle-like refer- ence signal s, the direct coefficient used in D- COUNT DOWN . To enable direct comparison, we define binary masks Sk,Uk, and Hkbased on the top- kmag- nitude entries of each vector. Each mask marks components as “alive” (1) if they survive quantile thresholding, and "dead" (0) otherwise. These bi- nary masks are equivalent to the index sets IDXk used for sparse activation, as each represents the support of the corresponding IDXkin vector form. We first define a metric called Comparative Influential Factor (CIF) to measure how much influence u(orh) has on the final decision of s, es- pecially in cases where it overrides the other com- ponent. Analogously, for instance, CIFk(u,alive)k Method FLOPs(M) Mem.(MB)Throughput 512 1024 0.0 Dense 352.41 168.121 24.64 22.63 0.7CATS 188.00 89.746 32.62 29.40 MC 187.95 89.719 33.61 30.32 DC 124.59 59.480 30.69 27.80 0.8CATS 164.52 78.550 32.72 29.60 MC 164.46 78.522 33.80 30.61 DC 89.37 42.684 30.70 27.57 0.9CATS 141.02 67.345 32.98 29.81 MC 140.96 67.318 33.51 30.78 DC 54.11 25.877 30.73 27.55 Table 3: Theoretical FLOPs and Memory Traffic of Gated-MLP and actual throughput per second at se- quence lengths 512 and 1024 for Llama-3.1-8B-Instruct (dmodel = 4096 ,dinter= 14336 ). MC and DC refer to M-C OUNT DOWN and D-C OUNT DOWN respectively. measures how often u“rescues” a component that would otherwise have been pruned by h, allow- ing it to survive in sdue to its strong contribution. Formally, this is computed as: CIFk(u,alive) =|Sk∧ ¬Hk| |Sk|(7) This formulation follows from the definition of sas the elementwise product of uandh. When s[i]is alive but h[i]is small enough to be pruned, it implies that u[i]must have been large enough to compensate, effectively “saving” that entry. Next, we define the Comparative Agreement Factor (CAF) to evaluate how often one signal aligns with swhile the other disagrees. For in- stance, CAFk(u,alive)measures how frequently u agrees with son keeping a component, specifically when hdisagrees. This is given by: CAFk(u,alive) =|Sk∧ ¬Hk∧Uk| |Sk|(8) Both CIF and CAF can also be defined symmet- rically for the “dead” case by inverting the roles of activation and pruning. As shown in Figure 4, uoutperforms hacross all sparsity levels in both CIF and CAF. These results suggest that umore closely reflects the true acti- vation behavior captured by sand exerts a greater direct impact on sparsity decisions than h. In other words, uis more effective at preserving impor- tant activations and filtering out unimportant ones, explaining M-C OUNT DOWN ’s stable and reliable performance under sparsity. Nevertheless, since M-C OUNT DOWN still relies on an indirect coefficient u, it cannot fully match 7 Figure 4: Tornado plots of CIF and CAF across whiten- ing ratios. Bars to the right indicate the propor- tion of CIFk(·, alive ), while those to the left indicate CIFk(·, dead ). the upper-bound performance of D-C OUNT DOWN , which uses the full signal sdirectly.
https://arxiv.org/abs/2505.17701v1
Despite u’s strong CIF and CAF scores, substantial mismatches with respect to the oracle mask Skremain, with peak CIF values reaching only about 0.6 and CAF values about 0.4, underscoring the need for future work to translate D-C OUNT DOWN ’s upper-bound potential into SPPracdeployments. Possible Predictor Candidate: TernaryLin- ear D-C OUNT DOWN demonstrates a theoreti- cally sound and effective sparse activation strategy, achieving strong performance in the SPIdealset- ting. However, in the SPPracscenario, performance degradation occurs due to the predictor’s limited ac- curacy in recovering optimal sparsity patterns. This reflects the difficulty of the prediction task rather than a flaw in the sparsity criterion itself. The task simultaneously demands precision and computa- tional efficiency, presenting a significant challenge with considerable room for improvement. To empirically explore this potential, we evalu- ate an alternative predictor architecture, Ternary- Linear , whose weights are quantized as θternary∈ {−1,0,+1}dmodel×dinter. We compare its perfor- mance with the previously utilized low-rank ap- proximator. TernaryLinear achieves significant parameter compression by sacrificing numerical precision while preserving the matrix rank struc- ture. Motivated by recent studies demonstrating successful LLM pretraining with ternary quanti- zation while retaining strong model performance (Ma et al., 2024), we regard TernaryLinear as a promising candidate due to its demonstrated ex- pressiveness even under aggressive quantization. As shown in Table 4, TernaryLinear outperforms the low-rank baseline in F1 score, while also beingMetric TernaryLinear Low-Rank Latency (ms) 0.082 0.030 Theoretical footprint (MiB) 112 144 F1-score (%) 0.435 0.403 Table 4: Comparison between TernaryLinear and the Low-Rank Approximator. Latency for TernaryLinear was measured using the BitBLAS library (Wang et al., 2024). F1 score is reported as the average binary classifi- cation performance on S0.7across all evaluated models. more compact in terms of memory footprint. This suggests that preserving rank information, even at the cost of numerical precision, is more effective for sparse mask recovery than the reverse approach. However, TernaryLinear has not yet been adopted due to its relatively slower runtime despite its small size. This limitation stems not from algo- rithmic complexity, but rather the lack of optimized GPU kernel support for ultra-low-precision oper- ations. Prior work (Ma et al., 2025) suggests that future advances in kernel optimization and ultra- low-bit quantization are needed to fully leverage such architectures. With these improvements, tech- niques like TernaryLinear could become viable can- didates for enabling D-C OUNT DOWN to achieve its full SPIdealperformance in SPPracscenarios. 7 Conclusion We introduce COUNT DOWN , a novel sparse ac- tivation framework for improving inference effi- ciency of large language models. To overcome the limitations of traditional non-linear activation- based sparsity, we reformulate the computation as a weighted sum over the FFNN’s down projection matrix, effectively capturing inherent sparsity in modern Gated-MLP architectures. From this per- spective, we present two complementary strategies: M-C OUNT DOWN , which uses uderived from a sin- gle matrix Wupas its activation indicator, achieves faster inference and better performance preserva- tion than prior state-of-the-art methods while re- maining predictor-free. D-C OUNT DOWN directly leverages s, the coefficient vector of the weighted sum, for fine-grained sparsity selection, demon- strating
https://arxiv.org/abs/2505.17701v1
robust performance even when skipping 90% of computations under ideal conditions. Limitations Like most prior work on sparse activation, our study assumes a single-batch greedy decoding set- ting in on-device environments. While this scenario 8 is realistic for latency-sensitive edge inference, it may be less applicable in multi-batch or server- based deployments. In such cases, strategies such as computing the union of predicted index sets IDX across multiple samples could be explored. However, such an approach would require further investigation into how much parameter activation can be shared across inputs, a direction we leave for future work. Additionally, our sparsity criteria rely exclu- sively on activation magnitude. This choice of- fers clear interpretability and aligns well with the weighted-sum perspective we adopt. Nevertheless, alternative sparsity metrics, such as those explored by (Akhauri et al., 2024), remain an open research avenue. Expanding beyond simple magnitude- based thresholding could further enhance the per- formance of sparse activation methods. Ethical Considerations We affirm adherence to the ACL Rolling Review (ARR) ethical guidelines, explicitly addressing potential risks and responsible research practices. This research focuses on optimizing computational efficiency in large language models (LLMs), aimed at reducing resource usage and consequently low- ering environmental impact. We foresee no direct risks or potential harms to individuals or communi- ties resulting from this work. Comprehensive details regarding the ethical use of scientific artifacts, reproducibility of computa- tional experiments, and related considerations are thoroughly documented in Appendix A. References Marah Abdin, Jyoti Aneja, Harkirat Behl, Sébastien Bubeck, Ronen Eldan, Suriya Gunasekar, Michael Harrison, Russell J. Hewett, Mojan Javaheripi, Piero Kauffmann, James R. Lee, Yin Tat Lee, Yuanzhi Li, Weishung Liu, Caio C. T. Mendes, Anh Nguyen, Eric Price, Gustavo de Rosa, Olli Saarikivi, and 8 others. 2024. Phi-4 technical report. Preprint , arXiv:2412.08905. Abien Fred Agarap. 2019. Deep learning using rectified linear units (relu). Preprint , arXiv:1803.08375. Yash Akhauri, Ahmed F AbouElhamayed, Jordan Dotzel, Zhiru Zhang, Alexander M Rush, Safeen Huda, and Mohamed S Abdelfattah. 2024. Shad- owLLM: Predictor-based contextual sparsity for large language models. In Proceedings of the 2024 Confer- ence on Empirical Methods in Natural Language Pro- cessing , pages 19154–19167, Miami, Florida, USA. Association for Computational Linguistics.Keivan Alizadeh, Iman Mirzadeh, Dmitry Belenko, S Karen Khatamifard, Minsik Cho, Carlo C Del Mundo, Mohammad Rastegari, and Mehrdad Fara- jtabar. 2024. LLM in a flash: Efficient large language model inference with limited memory. In ACL. Pranjal Awasthi, Nishanth Dikkala, Pritish Kamath, and Raghu Meka. 2024. Learning neural networks with sparse activations. In The Thirty Seventh Annual Con- ference on Learning Theory, June 30 - July 3, 2023, Edmonton, Canada , volume 247 of Proceedings of Machine Learning Research , pages 406–425. PMLR. Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: Reasoning about physical commonsense in natural language. In The Thirty-Fourth AAAI Conference on Artificial Intelli- gence, AAAI 2020, The Thirty-Second Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020 , pages 7432– 7439. AAAI Press.
https://arxiv.org/abs/2505.17701v1
Tianle Cai, Yuhong Li, Zhengyang Geng, Hongwu Peng, Jason D Lee, Deming Chen, and Tri Dao. 2024. Medusa: Simple LLM inference acceleration frame- work with multiple decoding heads. In Forty-first In- ternational Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024 . OpenRe- view.net. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803. 05457 . Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and Others. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110. 14168 . Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated con- volutional networks. In Proceedings of the 34th In- ternational Conference on Machine Learning , vol- ume 70 of Proceedings of Machine Learning Re- search , pages 933–941. PMLR. DeepSeek-AI, Aixin Liu, Bei Feng, Bing Xue, Bingx- uan Wang, Bochao Wu, Chengda Lu, Chenggang Zhao, Chengqi Deng, Chenyu Zhang, Chong Ruan, Damai Dai, Daya Guo, Dejian Yang, Deli Chen, Dongjie Ji, Erhang Li, Fangyun Lin, Fucong Dai, and 181 others. 2024. DeepSeek-V3 technical report. arXiv [cs.CL] . Yuntian Deng, Yejin Choi, and Stuart M Shieber. 2024. From explicit CoT to implicit CoT: Learning to inter- nalize CoT step by step. CoRR , abs/2405.14838. Stefan Elfwing, Eiji Uchibe, and Kenji Doya. 2018. Sigmoid-weighted linear units for neural network 9 function approximation in reinforcement learning. Neural Networks , 107:3–11. Leo Gao, Jonathan Tow, Baber Abbasi, Stella Bider- man, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell, Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf, Aviya Skowron, Lintang Sutawika, and 5 others. 2024. A framework for few-shot language model evaluation. Gemma Team, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Rivière, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-Bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Etienne Pot, Ivo Penchev, and 197 others. 2025. Gemma 3 technical report. arXiv [cs.CL] . Gemma Team, Morgane Riviere, Shreya Pathak, Pier Giuseppe Sessa, Cassidy Hardin, Surya Bhupati- raju, Léonard Hussenot, Thomas Mesnard, Bobak Shahriari, Alexandre Ramé, Johan Ferret, Peter Liu, Pouya Tafti, Abe Friesen, Michelle Casbon, Sabela Ramos, Ravin Kumar, Charline Le Lan, Sammy Jerome, and 179 others. 2024. Gemma 2: Improv- ing open language models at a practical size. arXiv [cs.CL] . Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schel- ten, Alex Vaughan, Amy Yang, Angela Fan, Anirudh Goyal, Anthony Hartshorn, Aobo Yang, Archi Mi- tra, Archie Sravankumar, Artem Korenev, Arthur Hinsvark, and 542 others. 2024. The llama 3 herd of models. arXiv [cs.AI] . Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs). arXiv [cs.LG] . Joonwon Jang, Jaehee Kim, Wonbin Kweon, and Hwanjo Yu. 2024. Verbosity-aware rationale re- duction: Effective reduction of redundant rationale via principled criteria. arXiv preprint arXiv:2412. 21006 . Woosuk
https://arxiv.org/abs/2505.17701v1
Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph Gonza- lez, Hao Zhang, and Ion Stoica. 2023. Efficient mem- ory management for large language model serving with PagedAttention. In Proceedings of the 29th Sym- posium on Operating Systems Principles, SOSP 2023, Koblenz, Germany, October 23-26, 2023 , pages 611– 626. ACM. Donghyun Lee, Jaeyong Lee, Genghan Zhang, Mo Ti- wari, and Azalia Mirhoseini. 2024. CATS: Context- aware thresholding for sparsity in large language models. In First Conference on Language Model- ing. Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, andTatsunori B. Hashimoto. 2023a. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval . Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J. Reddi, Ke Ye, Felix Chern, Felix X. Yu, Ruiqi Guo, and Sanjiv Kumar. 2023b. The lazy neuron phenomenon: On emergence of activation sparsity in transformers. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023 . OpenReview.net. Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022 , pages 3214–3252. Association for Computational Linguistics. Jiahao Liu, Qifan Wang, Jingang Wang, and Xunliang Cai. 2024. Speculative decoding via early-exiting for faster LLM inference with thompson sampling control mechanism. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11-16, 2024 , pages 3027–3043. Association for Computational Linguistics. Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, and Beidi Chen. 2023. Deja vu: Contextual sparsity for efficient LLMs at inference time. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA , volume 202 of Proceedings of Machine Learning Research , pages 22137–22176. PMLR. Shuming Ma, Hongyu Wang, Shaohan Huang, Xingxing Zhang, Ying Hu, Ting Song, Yan Xia, and Furu Wei. 2025. Bitnet b1.58 2b4t technical report. Preprint , arXiv:2504.12285. Shuming Ma, Hongyu Wang, Lingxiao Ma, Lei Wang, Wenhui Wang, Shaohan Huang, Li Dong, Ruiping Wang, Jilong Xue, and Furu Wei. 2024. The era of 1-bit LLMs: All large language models are in 1.58 bits. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question an- swering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018 , pages 2381–2391. Association for Computational Linguistics. Seyed-Iman Mirzadeh, Keivan Alizadeh-Vahid, Sachin Mehta, Carlo C del Mundo, Oncel Tuzel, Golnoosh Samei, Mohammad Rastegari, and Mehrdad Fara- jtabar. 2024. ReLU strikes back: Exploiting activa- tion sparsity in large language models. In The Twelfth 10 International Conference on Learning Representa- tions, ICLR 2024, Vienna, Austria, May 7-11, 2024 . OpenReview.net. OpenAI, Aaron Hurst, Adam Lerer, Adam P Goucher, Adam Perelman, Aditya Ramesh, Aidan Clark, A J Ostrow, Akila Welihinda, Alan Hayes, Alec
https://arxiv.org/abs/2505.17701v1
Radford, Aleksander M ˛ adry, Alex Baker-Whitcomb, Alex Beu- tel, Alex Borzunov, Alex Carney, Alex Chow, Alex Kirillov, Alex Nichol, and 400 others. 2024. GPT-4o system card. arXiv [cs.CL] . Qwen, An Yang, Baosong Yang, Beichen Zhang, Binyuan Hui, Bo Zheng, Bowen Yu, Chengyuan Li, Dayiheng Liu, Fei Huang, Haoran Wei, Huan Lin, Jian Yang, Jianhong Tu, Jianwei Zhang, Jianxin Yang, Jiaxi Yang, Jingren Zhou, Junyang Lin, and 24 others. 2024. Qwen2.5 technical report. arXiv [cs.CL] . Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavat- ula, and Yejin Choi. 2020. WinoGrande: An adver- sarial winograd schema challenge at scale. In The Thirty-Fourth AAAI Conference on Artificial Intelli- gence, AAAI 2020, The Thirty-Second Innovative Ap- plications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020 , pages 8732– 8740. AAAI Press. Noam Shazeer. 2020. GLU variants improve trans- former. arXiv [cs.LG] . Chenyang Song, Xu Han, Zhengyan Zhang, Shengding Hu, Xiyu Shi, Kuai Li, Chen Chen, Zhiyuan Liu, Guangli Li, Tao Yang, and Maosong Sun. 2025. ProSparse: Introducing and enhancing intrinsic acti- vation sparsity within large language models. In Proceedings of the 31st International Conference on Computational Linguistics, COLING 2025, Abu Dhabi, UAE, January 19-24, 2025 , pages 2626–2644. Association for Computational Linguistics. Yixin Song, Zeyu Mi, Haotong Xie, and Haibo Chen. 2024a. Powerinfer: Fast large language model serv- ing with a consumer-grade GPU. In Proceedings of the ACM SIGOPS 30th Symposium on Operating Systems Principles, SOSP 2024, Austin, TX, USA, November 4-6, 2024 , pages 590–606. ACM. Yixin Song, Haotong Xie, Zhengyan Zhang, Bo Wen, Li Ma, Zeyu Mi, and Haibo Chen. 2024b. Turbo sparse: Achieving llm sota performance with mini- mal activated parameters. arXiv preprint arXiv:2406. 05955 . Mingjie Sun, Xinlei Chen, J Zico Kolter, and Zhuang Liu. 2024. Massive activations in large language models. In First Conference on Language Modeling . Philippe Tillet, Hsiang-Tsung Kung, and David D. Cox. 2019. Triton: an intermediate language and com- piler for tiled neural network computations. In Pro- ceedings of the 3rd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, MAPL@PLDI 2019, Phoenix, AZ, USA, June 22, 2019 , pages 10–19. ACM.Lei Wang, Lingxiao Ma, Shijie Cao, Quanlu Zhang, Ji- long Xue, Yining Shi, Ningxin Zheng, Ziming Miao, Fan Yang, Ting Cao, Yuqing Yang, and Mao Yang. 2024. Ladder: Enabling efficient low-precision deep learning computing through hardware-aware tensor transformation. In 18th USENIX Symposium on Op- erating Systems Design and Implementation (OSDI 24), pages 307–323, Santa Clara, CA. USENIX As- sociation. Zhenliang Xue, Yixin Song, Zeyu Mi, Xinrui Zheng, Yubin Xia, and Haibo Chen. 2024. Powerinfer-2: Fast large language model inference on a smartphone. arXiv preprint arXiv:2406.06282 . Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Conference of the Association for Compu- tational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers , pages 4791–4800. Association for Computational Linguis- tics. Zhengyan Zhang, Yixin Song, Guanghui Yu,
https://arxiv.org/abs/2505.17701v1
Xu Han, Yankai Lin, Chaojun Xiao, Chenyang Song, Zhiyuan Liu, Zeyu Mi, and Maosong Sun. 2024. Relu2wins: Discovering efficient activation functions for sparse LLMs. arXiv preprint arXiv:2402. 03804 . 11 A Experimental Details A.1 Hyperparameters Name Low-Rank θA,θB BitLinear θternary Loss Binary Cross Entropy Sparsity ratio ( k) {0.7,0.8,0.9} Learning rate { 1e-3, 5e-4} Training batch size {16,32} Optimizer AdamW Target Binary mask sk alive Epochs {10,20,40,80} Seed 42 Predictor shape Low-Rank Approximator TernaryLinear drank {128,256,512,1024} – Hardware 1 ×NVIDIA A100 80GB Table 5: Hyperparameter settings and additional re- producibility details for training predictors used in D- COUNT DOWN . All experiments were conducted using a single run without multiple random seeds. A.2 Environments All experiments were performed on an NVIDIA A100 80GB GPU. We used Triton v3.1.0 for cus- tom kernel development, while the rest of the exper- imental pipeline was built on HuggingFace Trans- formers v4.51.3, PyTorch v2.5.1, and CUDA v12.1. A.3 Dataset Description Table 6 summarizes the licenses and dataset statis- tics used for evaluation. We evaluate seven Natural Language Under- standing(NLU) tasks and one Natural Language Generation(NLG) task focused on mathematical reasoning (GSM8K). All datasets primarily con- tain English text. Dataset License Train Test ARC-Easy cc-by-sa-4.0 2251 (500) 2376 ARC-Challenge cc-by-sa-4.0 1119 (500) 1172 HellaSwag MIT 39905 (500) 10042 PIQA AFL 3.0 16113 (500) 1838 WinoGrande apache-2.0 40398 (500) 1267 OpenBookQA apache-2.0 4957 (500) 500 TruthfulQA apache-2.0 0 817 GSM8K MIT 0 1319 Table 6: Summary statistics and licenses for datasets used in evaluation. Following previous research (Akhauri et al., 2024), we used subsets of each down- stream task’s training set, each containing 500 examples.B Pseudo Codes B.1 Procedures for Training Predictor Algorithm 1: Training the predictor for a Gated-MLP in D-C OUNT DOWN Input: Training samples {xj}N j=1, Target module GatedMLP , Target sparsity ratio k Output: Trained predictor parameters θ 1foreach training sample xjdo 2 sj←Compute GatedMLP (xj); 3 sj←Binarize using Quantile (k,|sj|); 4ifPredictor is Low-Rank then 5 Initialize parameters θA,θB; 6else if Predictor is TernaryLinear then 7 Initialize parameters θternary ; 8foreach training iteration do 9 Sample mini-batch {xb, sb}; 10 ifPredictor is Low-Rank then 11 ˆsb=xb·θA·θB; 12 else if Predictor is TernaryLinear then 13 ˆsb=xb·θternary ; 14 Compute BCE loss between ˆsbandsb; 15 Update predictor parameters θ; 12 B.2 Kernel in Detail: M-C OUNT DOWN Algorithm 2: M-C OUNT DOWN Inference Kernel (Triton-style) Input: X, W up,ˆτM Output: U,Mask 1# PyTorch ; 2U←X@Wup; 3Mask←(|U| ≥ˆτM); Input: X, U, W gate,Mask,BLK M,BLK N Output: S 4# Triton 1 ; 5start_m ←tl.program_id (0); 6rm←start_m ×BLK M+tl.arange (0,BLK M); 7rn←tl.arange (0,BLK N); 8Mask←Mask +rm ; 9flag←tl.load (Mask)>0; 10Wgate←Wgate+(rm[:,None]×dmodel +rn[None,:]); 11X←X+rn ; 12acc←tl.zeros ((BLK M)); 13i_mask ←flag[:,None]; 14foreach block in rn do 15 w←tl.load (Wgate,mask =i_mask ,other = 0; 16 x←tl.load (X); 17 acc←acc+tl.sum (w×x[None,:],1); 18 Wgate←Wgate+BLK N; 19 X←X+BLK N; 20U←U+rm ; 21u←tl.load (U,mask =flag,other = 0) ; 22acc←silu(acc)×u; 23S←S+rm ; 24tl.store (S,acc,mask =rm < d inter); Input: S, W down,Mask,BLK M,BLK N Output: Y 25# Triton 2 ; 26start_m ←tl.program_id (0); 27start_n ←tl.program_id (1); 28rm←start_m ×BLK M+tl.arange (0,BLK M); 29rn←start_n ×BLK N+tl.arange (0,BLK N); 30Mask←Mask +rm
https://arxiv.org/abs/2505.17701v1
; 31flag←tl.load (Mask)>0; 32Wdown←Wdown+(rm[:,None]×dmodel +rn[None,:]); 33S←S+rm ; 34w←tl.load (Wdown,mask=flag[:,None],other=0) ; 35x←tl.load (S); 36acc←tl.sum (w×x[:,None],0); 37Y←Y+rn ; 38tl.atomic_add (Y,acc);B.3 Kernel in Detail: D-C OUNT DOWN Algorithm 3: D-C OUNT DOWN Inference Kernel (Triton-style) Input: X, θ A, θB, τD Output: Mask 1# PyTorch ; 2ˆs←X@θA@θB; 3Mask←(ˆs≥τD); Input: X, W gate, W up,Mask,BLK M,BLK N Output: S 4# Triton 1 ; 5start_m ←tl.program_id (0); 6rm←start_m ×BLK M+tl.arange (0,BLK M); 7rn←tl.arange (0,BLK N); 8Mask←Mask +rm ; 9flag←tl.load (Mask)>0; 10Wgate←Wgate+(rm[:,None]×dmodel +rn[None,:]); 11Wup←Wup+(rm[:,None]×dmodel +rn[None,:]); 12X←X+rn ; 13gate←tl.zeros ([BLK M]); 14up←tl.zeros ([BLK M]); 15i_mask ←flag[:,None]; 16foreach block in rn do 17 wgate←tl.load (Wgate,mask =i_mask ,other = 0); 18 wup←tl.load (Wup,mask =i_mask ,other = 0) ; 19 x←tl.load (X); 20 gate←gate+tl.sum (wgate×x[None,:],axis= 1); 21 up←up+tl.sum (wup×x[None,:],axis= 1) ; 22 X←X+BLK N; 23 Wgate←Wgate+BLK N; 24 Wup←Wup+BLK N; 25up←up×SiLU(gate); 26tl.store (S,up,mask =rm< M); Input: S, W down,Mask,BLK M,BLK N Output: Y 27# Triton 2 ; 28start_m ←tl.program_id (0); 29start_n ←tl.program_id (1); 30rm←start_m ×BLK M+tl.arange (0,BLK M); 31rn←start_n ×BLK N+tl.arange (0,BLK N); 32Mask←Mask +rm ; 33flag←tl.load (Mask)>0; 34Wdown←Wdown+(rm[:,None]×dmodel +rn[None,:]); 35S←S+rm ; 36w←tl.load (Wdown,mask=flag[:,None],other=0) ; 37x←tl.load (S); 38acc←tl.sum (w×x[:,None],0); 39Y←Y+rn ; 40tl.atomic_add (Y,acc); C Full Results All downstream task results are in Table 11 and Table 12. Chat performance results are shown in Table 7. Kernel Speed results are shown in Fig- ure 5. SparsityRealindicates the actual proportion of indicator elements filtered out during SPPracin- ferences. 13 Method ScenarioLlama-3.1-8B Gemma-2-9B Target Sparsity Target Sparsity 0.70 0.80 0.90 0.70 0.80 0.90 CATSSPIdeal1.02 0.48 0.50 35.41 2.55 0.00 SPPrac(Win) 3.26 0.55 0.72 40.76 6.72 0.00 SPPrac(SparsityReal)70.8 80.0 89.7 68.8 79.3 88.1 DCSPIdeal45.79 39.33 11.85 50.44 48.90 37.79 SPPrac(Win) 1.35 1.57 0.77 6.72 7.99 3.57 SPPrac(SparsityReal)68.8 71.5 80.8 67.0 72.9 83.2 MCSPIdeal46.59 2.74 0.60 47.81 41.83 6.91 SPPrac(Win) 9.68 3.19 0.74 48.78 47.88 28.08 SPPrac(SparsityReal)72.7 82.3 91.0 68.0 77.8 87.6 Method ScenarioQwen2.5-14B Phi-4 Target Sparsity Target Sparsity 0.70 0.80 0.90 0.70 0.80 0.90 CATSSPIdeal21.94 0.38 0.00 42.05 3.45 0.25 SPPrac(Win) 33.62 7.80 0.00 48.87 26.81 0.31 SPPrac(SparsityReal)70.0 80.0 89.1 67.6 78.3 89.7 DCSPIdeal50.10 48.71 32.73 49.10 46.46 37.45 SPPrac(Win) 4.40 0.77 0.20 1.11 0.90 0.12 SPPrac(SparsityReal)66.6 82.8 87.6 65.4 78.5 86.8 MCSPIdeal45.01 36.90 2.83 43.96 35.43 5.28 SPPrac(Win) 48.57 42.51 15.89 46.24 41.62 18.82 SPPrac(SparsityReal)70.0 80.0 90.0 70.1 79.6 89.6 Table 7: Summary of Win Rate on AlpacaEval 2.0 Figure 5: All results for kernel speed. D Theoretical Analysis Details D.1 Notation Notation Explanation dm dmodel di dinter dr drank s int (dinter×k) cact act FLOPs (e.g. SiLU ≈5) Table 8: Notation Used in Theoretical AnalysisD.2 heoretical FLOPs Analysis Method Compute Explanation Dense6dmdi +cactdi +diFull GEMV ×3 Fullσ Full⊙ CATS2dmdi +cactdi + 2di + 2dms +s + 2dmsFull GEMV Wgate Fullσ Apply abs and THLD Sparse GEMV Wup Sparse ⊙ Sparse GEMV Wdown M-C OUNT DOWN2dmdi + 2di + 2dms +cacts +s + 2dmsFull GEMV Wup Apply abs and THLD Sparse GEMV Wgate Sparse σ Sparse ⊙ Sparse GEMV Wdown D-C OUNT DOWN2dmdr + 2drdi +di + 4dms +cacts +s + 2dmsLow-rank GEMV θA Low-rank GEMV θB Apply THLD Sparse GEMV Wgate, Wup Sparse σ Sparse ⊙ Sparse GEMV Wdown Table 9: Comparison of Theoretical
https://arxiv.org/abs/2505.17701v1
FLOPs Across Methods D.3 Theoretical Memory Traffic Analysis Method Mem. R/W Explanation Dense2dmdi + 2dm + 2di +di +di + 2di +di +dmdi +di +dmRead Full Wup, Wgate Read x×2 Write gate, up Read gate Write act_gate Read act_gate, up Write inter Read Full Wdown Read inter Write y CATSdmdi +dm +di +di +di +di +di +di +di +dms +dm +s +di +di +dms +di +dmRead Full Wgate Read x Write gate Read gate Write act_gate Read act_gate Write abs_act_gate Read abs_act_gate Write mask Read Sparse Wup Read x Read Sparse act_gate Read mask Write inter Read Sparse Wdown Read inter Write y 14 Method Mem. R/W Explanation M-Countdowndmdi +dm +di +di +di +di +di +dms +dm +s +di +di +dms +di +dmRead Full Wup Read x Write up Read up Write abs_up Read abs_up Write mask Read Sparse Wgate Read x Read Sparse up Read mask Write inter Read Sparse Wdown Read inter Write y D-Countdowndmdr +dm +dr +drdi +dr +di +di +di + 2dms +dm +di +di +dms +di +dmRead θA Read x Write latent Read θB Read latent Write ˆs Read ˆs Write mask Read Sparse Wup, Wgate Read x Read mask Write inter Read Sparse Wdown Read inter Write y Table 10: Comparison of Theoretical Memory Traffic Across Methods 15 Sparsity Method ARC-C TFQA HS ARC-E PIQA WG OBQA GSM8K Llama-3.1-8B-Instruct 0.0 DENSE 0.520 0.367 0.590 0.819 0.800 0.737 0.336 0.760 0.7DEJA VU 0.292 0.229 0.272 0.445 0.553 0.503 0.218 0.000 CATS 0.453 0.343 0.523 0.754 0.739 0.653 0.298 0.003 M-COUNTDOWN 0.493 0.372 0.568 0.784 0.776 0.695 0.330 0.544 D-COUNTDOWN 0.509 0.370 0.592 0.812 0.795 0.727 0.332 0.688 0.8DEJA VU 0.282 0.231 0.273 0.440 0.557 0.511 0.228 0.000 CATS 0.358 0.326 0.428 0.651 0.676 0.582 0.278 0.000 M-COUNTDOWN 0.458 0.343 0.534 0.759 0.748 0.661 0.314 0.288 D-COUNTDOWN 0.502 0.356 0.585 0.809 0.789 0.713 0.334 0.605 0.9DEJA VU 0.296 0.230 0.273 0.455 0.557 0.530 0.236 0.000 CATS 0.293 0.252 0.303 0.495 0.574 0.534 0.242 0.000 M-COUNTDOWN 0.411 0.304 0.430 0.649 0.676 0.613 0.286 0.001 D-COUNTDOWN 0.484 0.330 0.548 0.776 0.755 0.680 0.312 0.313 Qwen2.5-14B-Instruct 0.0 DENSE 0.608 0.517 0.657 0.861 0.817 0.758 0.364 0.807 0.7DEJA VU 0.336 0.318 0.365 0.612 0.616 0.533 0.254 0.000 CATS 0.488 0.443 0.585 0.777 0.729 0.629 0.318 0.043 M-COUNTDOWN 0.573 0.488 0.638 0.829 0.792 0.704 0.352 0.776 D-COUNTDOWN 0.588 0.518 0.654 0.850 0.801 0.736 0.364 0.770 0.8DEJA VU 0.340 0.322 0.357 0.609 0.619 0.554 0.258 0.000 CATS 0.410 0.371 0.472 0.683 0.632 0.568 0.284 0.000 M-COUNTDOWN 0.532 0.476 0.614 0.813 0.743 0.670 0.352 0.681 D-COUNTDOWN 0.579 0.488 0.644 0.837 0.799 0.716 0.360 0.751 0.9DEJA VU 0.358 0.333 0.369 0.612 0.621 0.531 0.256 0.000 CATS 0.356 0.327 0.385 0.619 0.621 0.547 0.260 0.000 M-COUNTDOWN 0.468 0.421 0.525 0.736 0.686 0.589 0.304 0.100 D-COUNTDOWN 0.512 0.436 0.607 0.801 0.756 0.648 0.312 0.371 gemma-2-9b-it 0.0 DENSE 0.632 0.433 0.597 0.856 0.812 0.761 0.404 0.663 0.7DEJA VU 0.339 0.246 0.300 0.596 0.590 0.532 0.276 0.000 CATS 0.575 0.412 0.559 0.840 0.755 0.680 0.348 0.565 M-COUNTDOWN 0.605 0.421 0.592 0.849 0.793 0.726 0.374 0.632 D-COUNTDOWN 0.626 0.417 0.600 0.854
https://arxiv.org/abs/2505.17701v1
0.800 0.750 0.384 0.649 0.8DEJA VU 0.346 0.246 0.296 0.599 0.581 0.548 0.262 0.000 CATS 0.490 0.366 0.486 0.788 0.696 0.604 0.328 0.105 M-COUNTDOWN 0.583 0.408 0.582 0.842 0.767 0.707 0.360 0.610 D-COUNTDOWN 0.604 0.421 0.599 0.851 0.796 0.728 0.374 0.624 0.9DEJA VU 0.356 0.246 0.303 0.616 0.573 0.523 0.264 0.000 CATS 0.364 0.242 0.310 0.617 0.589 0.537 0.278 0.000 M-COUNTDOWN 0.534 0.383 0.517 0.799 0.727 0.648 0.344 0.438 D-COUNTDOWN 0.578 0.410 0.572 0.833 0.777 0.676 0.352 0.524 phi-4 0.0 DENSE 0.558 0.404 0.632 0.814 0.808 0.766 0.338 0.923 0.7DEJA VU 0.387 0.311 0.348 0.655 0.626 0.587 0.266 0.000 CATS 0.536 0.400 0.595 0.794 0.791 0.696 0.304 0.807 M-COUNTDOWN 0.533 0.384 0.616 0.800 0.796 0.733 0.334 0.888 D-COUNTDOWN 0.554 0.411 0.630 0.809 0.807 0.752 0.332 0.916 0.8DEJA VU 0.409 0.333 0.354 0.655 0.632 0.585 0.270 0.000 CATS 0.516 0.397 0.539 0.771 0.760 0.644 0.298 0.351 M-COUNTDOWN 0.503 0.386 0.594 0.792 0.778 0.715 0.330 0.767 D-COUNTDOWN 0.552 0.408 0.622 0.807 0.810 0.755 0.340 0.898 0.9DEJA VU 0.392 0.317 0.357 0.640 0.630 0.566 0.266 0.000 CATS 0.426 0.356 0.414 0.676 0.672 0.591 0.280 0.000 M-COUNTDOWN 0.479 0.370 0.524 0.759 0.728 0.654 0.296 0.287 D-COUNTDOWN 0.529 0.399 0.601 0.798 0.789 0.695 0.318 0.827 Table 11: SPIdealscores compared to Dense across all downstream tasks. Dense scores are in bold, as well as the highest score for each task within each sparsity level. 16 Sparsity Method SparsityRealARC-C TFQA HS ARC-E PIQA WG OBQA GSM8K Llama-3.1-8B-Instruct 0.7CATS 0.684 0.461 0.355 0.549 0.778 0.764 0.683 0.316 0.127 M-COUNTDOWN 0.709 0.484 0.375 0.574 0.788 0.778 0.708 0.310 0.547 D-COUNTDOWN 0.705 0.422 0.318 0.373 0.748 0.714 0.663 0.298 0.002 0.8CATS 0.784 0.420 0.322 0.495 0.718 0.721 0.624 0.296 0.000 M-COUNTDOWN 0.806 0.460 0.361 0.549 0.770 0.757 0.680 0.322 0.322 D-COUNTDOWN 0.739 0.382 0.306 0.388 0.688 0.673 0.621 0.292 0.003 0.9CATS 0.902 0.299 0.273 0.323 0.521 0.607 0.537 0.238 0.000 M-COUNTDOWN 0.895 0.416 0.321 0.471 0.711 0.721 0.620 0.304 0.009 D-COUNTDOWN 0.843 0.349 0.285 0.345 0.628 0.635 0.593 0.260 0.000 Qwen2.5-14B-Instruct 0.7CATS 0.698 0.518 0.460 0.612 0.805 0.761 0.660 0.336 0.293 M-COUNTDOWN 0.719 0.590 0.509 0.640 0.838 0.792 0.712 0.358 0.767 D-COUNTDOWN 0.678 0.513 0.426 0.536 0.798 0.748 0.668 0.322 0.197 0.8CATS 0.802 0.472 0.421 0.551 0.754 0.712 0.627 0.284 0.000 M-COUNTDOWN 0.804 0.553 0.492 0.625 0.826 0.769 0.669 0.354 0.704 D-COUNTDOWN 0.827 0.454 0.394 0.468 0.740 0.693 0.615 0.292 0.000 0.9CATS 0.906 0.347 0.350 0.393 0.631 0.616 0.527 0.258 0.000 M-COUNTDOWN 0.889 0.492 0.450 0.580 0.794 0.727 0.632 0.320 0.287 D-COUNTDOWN 0.893 0.434 0.384 0.429 0.689 0.669 0.605 0.282 0.000 gemma-2-9b-it 0.7CATS 0.695 0.580 0.427 0.567 0.843 0.770 0.693 0.368 0.593 M-COUNTDOWN 0.685 0.608 0.431 0.598 0.854 0.801 0.745 0.386 0.633 D-COUNTDOWN 0.689 0.567 0.403 0.493 0.821 0.751 0.702 0.364 0.340 0.8CATS 0.806 0.542 0.392 0.501 0.811 0.729 0.615 0.346 0.083 M-COUNTDOWN 0.779 0.596 0.412 0.589 0.847 0.788 0.712 0.370 0.618 D-COUNTDOWN 0.755 0.564 0.401 0.506 0.819 0.758 0.702 0.374 0.381 0.9CATS 0.911 0.340 0.258 0.306 0.617 0.586 0.514 0.262 0.000 M-COUNTDOWN 0.875 0.573 0.395 0.554 0.829 0.761 0.686 0.360 0.544 D-COUNTDOWN 0.853 0.529 0.383 0.492 0.806 0.747 0.665 0.354 0.187
https://arxiv.org/abs/2505.17701v1
phi-4 0.7CATS 0.675 0.539 0.417 0.613 0.801 0.795 0.724 0.322 0.856 M-COUNTDOWN 0.707 0.540 0.393 0.620 0.804 0.796 0.736 0.332 0.894 D-COUNTDOWN 0.687 0.471 0.368 0.485 0.750 0.733 0.685 0.294 0.208 0.8CATS 0.771 0.525 0.390 0.587 0.795 0.786 0.673 0.300 0.675 M-COUNTDOWN 0.799 0.527 0.381 0.607 0.793 0.784 0.715 0.334 0.817 D-COUNTDOWN 0.815 0.418 0.343 0.438 0.707 0.692 0.657 0.268 0.036 0.9CATS 0.889 0.458 0.360 0.460 0.713 0.692 0.605 0.294 0.000 M-COUNTDOWN 0.894 0.498 0.386 0.563 0.777 0.750 0.698 0.316 0.450 D-COUNTDOWN 0.895 0.408 0.294 0.404 0.674 0.668 0.616 0.270 0.000 Table 12: SPPracscores compared across all downstream tasks. Bold indicates the highest score at each sparsity level for each task. 17
https://arxiv.org/abs/2505.17701v1
arXiv:2505.17704v1 [cs.CL] 23 May 2025SemSketches-2021: experimenting with the machine processing of the pilot semantic sketches corpus Maria Ponomareva♠,♡Maria Petrova♠Julia Detkova♠Oleg Serikov♡,♢Maria Yarova♣ ♠ABBYY , Moscow, Russia ♡National Research University Higher School of Economics, Moscow, Russia ♣Moscow Institute of Physics and Technology, Moscow, Russia ♢Deeppavlov MIPT, Moscow, Russia Abstract The paper deals with elaborating different approaches to the machine processing of semantic sketches. It presents the pilot open corpus of semantic sketches. Different aspects of creating the sketches are discussed, as well as the tasks that the sketches can help to solve. Special attention is paid to the creation of the machine processing tools for the corpus. For this purpose, the SemSketches-2021 Shared Task was organized. The participants were given the anonymous sketches and a set of contexts containing the necessary predicates. During the Task, one had to assign the proper contexts to the corresponding sketches. Key words: word sketches, semantic sketches, frame semantics, semantic role labeling, corpus lexicography SemSketches-2021: опыт автоматической обработки пилотного корпуса семантических скетчей Мария Пономарева Мария Петрова Юлия Деткова Олег Сериков Мария Ярова ABBYY, ВШЭ ABBYY ABBYY ВШЭ, Deeppavlov МФТИ Москва Москва Москва Москва Москва Аннотация Статья посвящена различным подходам к автоматической обработке семантических скетчей. В статье представлен первый открытый корпус семантических скетчей для русского языка. На примере данного корпуса рассматриваются особенности семантических скетчей и проблемы, воз- никающие при их построении, обсуждаются задачи, которые могут решаться с привлечением скетчей, а также дальнейшие перспективы использования скетчей. Особое внимание уделяется возможности создания инструментов автоматической обработки корпуса. В качестве эксперимен- та по созданию подобных инструментов авторами проведено соревнование SemSketches-2021, в рамках которого участникам предлагалась задача по работе с корпусом скетчей, где требовалось соотнести анонимизированные скетчи с рядом контекстов для соответствующих предикатов. Ключевые слова: скетчи слов, семантические скетчи, семантика фреймов, разметка семанти- ческих ролей, корпусная лексикография 1 Introduction The current paper continues the work on the semantic sketches which were first presented at the Dialogue-2020 conference. The idea of the semantic sketch was introduced in [7]. The semantic sketch is a special representation of a word’s compatibility where all semantic links of the word are grouped according to their semantic relations with the core they depend on. All possible semantic dependencies are statistically ranged: first, the frequency of the collocation between the parent and the child is taken into account; second, the frequency of the semantic role for the given core (for instance, the frequency of the Agent, Locative, Object, or Time). The most frequent collocations form the semantic sketch of the word. In [7], the authors focused on the creation of the semantic sketches and on testing the semantic mark-up used for the sketches. Namely, they measured the correctness of the predicate’s choice in a set of sentences and the choice of the proper semantic roles for the predicates’ dependencies. In the present work, the focus has been made on building the pilot corpus of the semantic sketches themselves, the SemSketches corpus . The corpus is aimed at achieving several purposes: 1. to evaluate how representative the sketches are, 2. to elaborate some tools for processing the sketches, 3. to specify what
https://arxiv.org/abs/2505.17704v1
kind of tasks the semantic sketches can help to solve, as our further plan is to integrate the sketches into the General Internet-Corpus of Russian (GICR, [4], [3]), 4. to analyze what kind of mistakes we happen to face while creating the sketches. The idea to represent a word’s meaning in the form of the semantic sketch is closely related to the main idea of distributional semantics according to which the meaning of the word can be represented through its lexical co-occurrence. The famous formula for the idea given in [10] says: “You shall know a word by the company it keeps”. Over the past few years, vector representations have become a traditional method of representing the word’s semantics. The static embeddings such as word2vec [8] and FastText [9] as well as the dynamic embeddings that followed, such as ELMo [5], ULMFit [13], and BERT [2], have completely changed the NLP field. However, quality evaluations of the vector representations pose a challenge, as their serious drawback is that one can neither assess nor interpret them directly. Whereas the vector is a numeric meaning representation, appropriate for computers, the semantic sketch can be considered its human-interpretable counterpart. As an experiment on processing the sketches automatically, we have introduced the SemSketches Shared Task . One of its goals is to connect these two methods of semantic representation. The Shared Task suggested the following problem. Participants were given the corpus of the semantic sketches with the core predicates unknown, that is, the semantic roles of the dependencies and the word- fillers of the roles were given, but not the predicates they were attached to. We have presented a set of such anonymous sketches and a list of contexts containing the predicates. The task was to create a tool that assigns the sketch to the corresponding contexts. For most sketches, the task did not seem difficult for a human, as some of the examples will demon- strate below, but it turned out to be rather complicated for the computer, as the results of the competition showed. The corpus and the Shared Task results are available at the SemSketches github1. 2 What is a semantic sketch There is no need to underline the importance of using text corpora for various purposes nowadays. The size of the corpora is growing quickly. On the one hand, it gives the users more opportunities and allows one to receive more representative data. On the other hand, with a bigger corpus, more sophisticated tools are demanded to process the results of the search queries. One of the methods to describe the word’s compatibility is to present it in the form of a syntactic sketch [22]. The syntactic sketch is a lexicographical profile of a word, where word dependencies are classified by their grammatical roles and ranged by the statistics of their compatibility with the core. The syntactic sketches were first introduced in the Sketch Engine project2and over the past years have become widely used in lexicography, language teaching, multilingual corpora creation, various translation resources, and in a number of other areas. The
https://arxiv.org/abs/2505.17704v1
evident advantage of the syntactic sketch is its vividness: it shows simultaneously all of the most frequent syntactic dependencies of a word and arranges them in a table according to the roles. At the same time, the syntactic sketches have one strong limitation: the grammatical information they are based on does not allow one to take lexical homonymy into account, which complicates the interpretation of the obtained results. In order to solve this problem, an attempt was made to create the semantic sketches [7], where the representation of a word’s compatibility is supplemented with semantic relations between words (each 1https://github.com/dialogue-evaluation/SemSketches 2www.sketchengine.eu relation is marked not only with a syntactic, but with a semantic role as well) and semantic classes of words (which mark the specific semantic meaning of a word in a context). Therefore, the semantic sketch is understood as a generalized lexicographic portrait of a word, which includes the most frequent semantic dependencies of the verb. In other words, it is a way of representing the compatibility of words, where the description of each word includes a set of its most frequent seman- tic dependencies classified according to their semantic roles. For each role a number of relevant “fillers” (words and phrases) are given, and the fillers are ranked according to the frequency of their compatibility with the core. Each sketch illustrates a word with a certain meaning. The semantic sketches are built with the help of the Compreno parser [24]. Unlike other parsers, the Compreno suggests full semantic mark-up, namely, it deals not only with the actant semantic dependen- cies of the predicates, but with the adjuncts, modifiers, and other dependencies as well [18]. It makes the sketches an important tool for dealing with the semantic role labeling (SRL) problem which has attracted many researchers recently. Despite high interest in the problem ([12], [11], [17], [15], [6], [16], [25]), until the current moment no research among the SRL papers has been presented (or, at least, we have not seen any), where all semantic roles are taken into account. Most works focus on the actant dependencies only, such as Agent, Object, or Experiencer. In the meantime, for many predicates, circumstantial dependencies are enough frequent and significant to get into the predicate’s sketch together with its actants, and, moreover, in some cases, help to identify the core even better than the actants do. The sketches are illustrated in the two examples below, the first one — for the verb «страдать :SUFFERING_AND_TORMENT» ‘to suffer’ (Figure 1) and the second one — for the verb «готовить:TO_PREPARE_FOOD_SUBSTANCE» ‘to prepare food, to cook’ (Figure 2): Figure 1: the sketch for the verb «страдать:SUFFERING_AND_TORMENT» (‘to suffer’). Here the elements of the sketch are given with their rough translations. The participants of the Shared Task got the same representations, but did not get the titles of the sketches. However, as the pictures above demonstrate, it does not seem difficult for a human to guess the proper predicates for the sketches, which allows us to regard the sketches as representative illustrations for the verb’s compatibility. Figure 2: the sketch
https://arxiv.org/abs/2505.17704v1
for the verb «готовить:TO_PREPARE_FOOD_SUBSTANCE» (‘to prepare food, to cook’). Here the elements of the sketch are given with their rough translations. 3 The SemSketches Shared Task To explore the semantic sketches as far as their quality and representativeness are concerned, we have created the pilot corpus of Russian semantic sketches and made it the basis for the SemSketches Shared Task. The problem was formulated as follows: given a set of anonymized sketches and a set of contexts for different predicates, one should match each predicate in its context to a relevant sketch. The second part of the competition data is the set of the contexts given for different predicates. In the case of ambiguous predicates, the WSD problem can be stated. 3.1 Data preparation Sketches The sketches were built on the texts from the Magazine Hall of the GICR. Although the parser gives us the full semantic mark-up, we have implemented some restrictions for the present research. As in [7], the authors have taken only verbal cores and their subtrees: all verbs are marked with semantic classes (denoting their meanings) and the semantic roles for their direct depend- encies. We did not mark the dependencies of the non-vebal cores, the dependencies of the ellipted verbs and the ellipted groups themselves, as well as the syntactically moved groups. In addition, we have introduced some additional restrictions for the purpose of the current competition, namely, we have excluded pronouns and personal nouns, as they complicate the work with the anonymized sketches. For the current corpus, we have chosen only verbs which have at least two meanings, as it makes the task of defining proper sketches more interesting, on the one hand, and, on the other hand, contributes to solving the WSD problem. It means that each verb chosen entered at least two semantic classes. The number of such verbs for the Russian language turned out to be more than ten thousand. Then we chose a subset of the list through selecting the verbs by the following principles. At the beginning, we have ranged the sample so that the verbs with the most frequent meanings came first: for instance, the verb рубить meaning TO_HACK (рубить дерево — ‘to hack a tree’) is sufficiently frequent, while the same verb meaning TO KNOW ABOUT (рубить в математике — ‘to understand mathematics well’) is rather marginal and has thus been positioned at the end of our list. The frequency of different meanings has been obtained with the help of the Compreno parser. Next, we have collected the verbs’ sketches taking into account the number of the relations the verb has in the corpus. Namely, we have collected all the semantic dependencies for each meaning of each verb in our marked-up corpus, and if the number of the dependent nodes exceeded the threshold of 2000, the predicate in the certain value was selected for inclusion in the final set. During this procedure, all dependencies were taken into account — both different and repeated, in order not to lose any frequent predicates with limited lexical compatibility. At the same time,
https://arxiv.org/abs/2505.17704v1
the threshold was rather high to preserve the quality of the sketches. At last, the final number of sketches in the pilot corpus became 915. Due to the exclusion of rare meanings, some verbs kept only one meaning in the sample, that is, the terminal verb list contained both polysemantic verbs with several meanings in the sample and polysemantic verbs which entered in our sample only in one (the most frequent) meaning. The next step was to analyze the correctness of the sketches, namely, to check whether the semantic dependencies and the fillers of the dependencies that got into the sketch really refer to the verb in the given meaning. The errors check was performed for a subsample of the corpus which formed the manual Dev data (see below). Most errors refer to situations where the more frequent homonym influences the less frequent one. For instance, the verb писать meaning ‘to paint’ ( писать портрет с кого-л. — ‘to paint smb.’s picture’ ) is less frequent than писать meaning ‘to write’ ( писать письмо — ‘to write a letter’), so the sketch for the писать — ‘to paint’ contains some incorrect examples in the Object dependency — such as ‘to write letters’. The reason is that when building the semantic structures for the sentences the sketch is based on, the structure with the incorrect but more frequent homonym gets a higher evaluation due to the high statistics of the more frequent verb. Another error can be illustrated with the sketch «готовить:TO_PREPARE_MEDICINE _OR_FOOD» ‘to cook’. It contains combinations like готовить резервную копию — ‘to prepare a reserve copy’. Here the problem is that the compatibility of ‘copy’ with the verbs depends not on the ‘copy’ itself but on the semantics of the noun following it, that is, ‘the copy of the cake’ is also possible. As an instance of the sketch with the incorrect semantic dependency, let us take the sketch «выхо- дить:идти:TO_WALK» ‘go out’ on the Figure 3. The sketch contains the Agent Metaphoric slot which must be definitely referred to another meaning, and the Purpose Goal slot contains the incorrect filler на связь (выйти на связь means ‘to get in touch’, and here another homonym of the verb выйти is supposed to be): The main reasons for the mistakes in the sketches are the incorrect influence of the statistics, certain inaccuracies of the semantic models in the parser, and the impossibility of distinguishing between the homonyms due to the closeness of their meanings or lack of distinguishing context in the sentences. Contexts Every meaning from the chosen set is illustrated with contexts. A context is a sentence with one target predicate highlighted. No additional mark-up is presented. Each meaning corresponds to several dozens of contexts with the target words having this meaning. The contexts were collected from news, fiction, publicistic texts, being close by genre to those presented in Magazine Hall. It is important that the contexts do not overlap with the corpus which the sketches were built on. The excerpt from the contexts is given in Table 1. Figure 3:
https://arxiv.org/abs/2505.17704v1
the semantic sketch for the verb «выходить:идти:TO_WALK» ‘to go out’). Here the elements of the sketch are given with their rough translations. ID dev.sent.rus.116 target наполнились start 46 end 57 context Когда доктор вошел, она вспыхнула, и глаза ее наполнились слезами Table 1: The example of the context. The position of the target word наполнились ‘filled’ in the context ‘When the doctor came in, she flushed, and her eyes filled with tears’ is defined by the offsets. Datasets The task was meant to be solved in a few-shot or unsupervised learning manner. During the Shared Task, we provided the participants with two sets of data. In the first phase, the Trial data was published. It comprises three parts: a set of sketches, a set of contexts, and mapping between these two sets. The participants could use the data to get familiar with the formats, to test their hypotheses and to fine-tune their systems. During the second phase, we provided the participants with the main set of the sketches and corresponding contexts, which will be referred to as Dev data. In contrast to trial data, where the mapping had been given, for Dev data the participants were asked to find the relations between the sketches and the contexts themselves. For the third phase, we manually selected 100 sketches and evaluated the corresponding contexts. This data formed the gold standard set for the task, which we will refer to as Manual Dev data. Table 2 shows the size of the obtained datasets. During the second phase, the participants were able to commit their answers to the CodaLab3to know the results on the Dev data and to choose the best decision. During the third phase, the performance of the best variants was finally evaluated on the Manual Dev data. After the announcement of the results, we published the answers (the mapping between the sketches and the contexts) on the SemSketches github. 3https://competitions.codalab.org/competitions/29992 split number of sketches number of contexts Trial 20 2000 Dev 895 44750 Manual Dev 100 4347 Table 2: The size of the SemSketches datasets, Manual Dev data forms a subset of Devdata 3.2 Evaluation metric The submitted systems were evaluated using the accuracy metric. For the Shared Task, accuracy was calculated as the number of matched pairs between the participants’ answers and test markup divided by the total number of contexts. The evaluation script is publicly available on the SemSketches github. 3.3 Baseline The participants were provided with a weak baseline solution. The solution was based on the masked language modeling (MLM) mechanism of the RuBERT [14] model. For a given context cont,sketch was chosen according to the computed sketch scores based on MLM candidates. MLM candidates ( MLMNcont) were calculated as follows: 1. syntactic analysis using the UDPipe ([23]) has been performed to find the direct dependents of the target predicate; 2. for each of the direct dependents, top- Nmask replacements RepN depwere stored; 3. stored replacements were intersected, i.e. MLMNcont=⋂︀{Rep𝑁 𝑑𝑒𝑝∀dep∈cont}; 4. sketch Score was computed as the number of tokens present in the intersection of the sketch repres- entation
https://arxiv.org/abs/2505.17704v1
and the stored MLM candidates. Score (sketch ,cont ) =⃒⃒⃒MLM1000cont∩Tokenssketch⃒⃒⃒ The intersection was performed over lemmas thus treating на заре and заря as intersecting entries. The weak baseline system has shown 0,0094 accuracy on the Dev data set thus overperforming the random baseline. 3.4 Submitted systems Three teams participated in the Shared Task: paleksandrova ,good501 ,smpl . All teams sugges- ted the solutions based on different approaches, and each solution managed to overcome the baseline. However, the final scores of each team turned out to be rather modest. To compare the results achieved, see Table 3 where the score of each team and the baseline score are presented. Team Dev Score Manual Dev Score paleksandrova 0.309 0.277 good501 0.104 0.127 smpl 0.182 0.121 baseline 0.0094 0.0035 Table 3: SemSketches Shared Task: the results of the submitted and baseline systems. Let us now shortly characterize each decision and analyze what core problems they faced. The team smpl used the brute-force approach. LM score has been used to rank sketches and choose the best one for each context. To estimate how well the predicate pred fits into the given sketch sketch , theLM score was used. LM score is the average probability of 𝑝𝑟𝑒𝑑 to replace [MASK] token in template sentence ‘ [MASK] cell ’. Template sentences were generated for each cellpresent in sketch . The team Good501 used the approach based on the sentences’ similarity objective, which is a popular objective when training language models. Target predicate was highlighted in the sentence using special tags. Sketch tables were flattened into pseudo-sentences. For the given sentence, the most similar sketch was chosen by using the Sentence-BERT[21] siamese similarity mechanism. The team paleksandrova [1] used the MLM approach, which consisted of first restoring the covered predicate for each of the given sketches, and then picking the relevant sketch for the target sentence. The covered predicates were restored by generating templates (e.g. «[MASK] в школу» — ‘[MASK] to school’) using the sketch content cells. The most frequent predicate of all the MLM hypo- theses for the sketch’s templates was treated as the re-covered predicate. The first sketch whose predicate matched the sentence predicate was used as the system answer. When no sketch was found by exact matching, the sketch whose restored predicate was word2vec-closest [8] to the sentence predicate was used as the answer. 3.5 The analysis of the submitted systems During the Shared Task, we formulated the experimental problem leaving enough room for different approaches. Although the performance of the submitted systems may be improved significantly, the proposed ideas were encouragingly diverse and thought provoking. The common feature of all three systems is using the pretrained Language Models. The team 501good which adopted the approach from Sentence Transformers introduced the only system that included training. The model was trained on the Trial data (20 sketches). The systems of smpl andpaleksandrova defined their unsupervised strategies for mapping the sketches and the contexts. While the smpl team estimated how well each target predicate fits to each sketch using the score from the Masked Language Model, the paleksandrova
https://arxiv.org/abs/2505.17704v1
team suggested an original approach imitating the way humans guess the core of the anonymous sketch. It is worth mentioning that the approaches of paleksandrova andsmpl by design cannot disam- biguate the polysemous predicate, as they take only the target verb into account but not its context. The team smpl approach could be thought of as scoring how well the sketch could account as the sentence predicate core. LM is trained on sentence-level objective, therefore, the successful application of the similarity approach demands more sophisticated preprocessing of the input sequence, for example, taking the predicate contexts into account. Such modification could improve the results. The team paleksandrova approach seems to be the most promising one. But the accuracy turned out to be rather low for the following reason. The sketch accumulates several verb forms, namely, it includes all tense, aspect and voice forms. For instance, the verbs строить ‘build’ Imperfective, NonReflexive ,построить ‘build’ Perfective, NonReflexive ,строиться ‘build’ Imperfective, Reflexive , построиться ‘build’ Perfective, Reflexive refer to one sketch. As far as paleksandrova approach is concerned, the team regarded such verbs as different candidates for a sketch. At the same time, they chose only one top candidate for each sketch. Therefore, only one grammatical form of the necessary set could be referred to the right sketch. 4 Discussion In the current paper, we demonstrated the pilot corpus of the semantic sketches, gave a brief analysis of the problems we faced during the corpus creation, and described the results of the SemSketches Shared Task aimed at applying the machine processing tools to the corpus. The sketches are based on the parser with full semantic mark-up, which defines their value and unique- ness: first, the sketches allow one to analyze not only the actant dependencies, but a full semantic model of a word; second, they differentiate between the various meanings of the verbs. As far as the opportunities for theoretical investigations are concerned, the sketches can help in dealing with all problems bound with the semantic compatibility of words. Especially, the SRL and the WSD problems must be mentioned here. As noted above, most researchers focus mainly only on the actant roles, while other dependencies do not usually get much attention. The semantic sketches suggest interesting data in this respect. The sketches include most frequent collocations, that is, the most natural, most typical contexts of a word. Among the dependencies the sketches include, modifiers and adjuncts are quite frequent. For some verbs, they seem to be even more specific than the actants and give more help in identifying the predicate. For instance, the Locative is a typical circumstantial adjunct, but it is an obligatory slot for the verbs with the position meaning such as быть ‘be’, находиться ‘be situated’. The Locative slot helps to differentiate between the ‘be’ with the position meaning and other be-homonyms, while the semantic role corresponding syntactically to the Subject of ‘be’ does not really contribute to differentiating between the be-homonyms. It seems that the meaning of the adjuncts and the modifiers is sometimes underevaluated, therefore, an interesting task is to
https://arxiv.org/abs/2505.17704v1
evaluate the correlation between the actant and circumstantial dependencies in the sketches. As for the applied tasks, one of the promising directions in using the semantic sketches is their im- plementation for probing tasks for the pretrained language models. The interpretation of the linguistic knowledge encoded in the pretrained models has attracted much attention recently ([26], [19], [20]). We believe that the semantic sketches can serve as a basis for both probing tasks and linguistically-motivated fine-tune tasks for such models. To summarize, the ideas from the proposed approaches can be used to embed effectively semantic sketches, making them not only a tool for manual lexicographical work but a semantic representation valid for automatic methods of Natural Language Processing. 5 Further plans Our next plan is to add the sketches into the GICR, which brings two problems to consider. The first one deals with the errors evaluation: in the current work, we did not check all the sketches in the pilot corpus manually — only the manual Dev data. Therefore, we did not evaluate the total number of the mistakes in the whole corpus. This task is still to be done, including work on both, that is, sketches that seem to be unsuitable (checking the manual Dev data shows that such cases are rare) and sketches containing single mistakes in either the semantic dependencies or their fillers. The second question is about the processing tools the sketches should be provided with. The SemS- ketches Shared Task demonstrated that machine tools can be successfully applied to the sketches pro- cessing (in spite of the fact that the precision of the solutions suggested by the applicants was not really high). What the tools should look like, depends significantly on the tasks the sketches will be used to solve. At the same time, we have recently started work on the English sketches, so our further plans include adding other languages to the sketch model, starting with the English sketches. References [1] Aleksandrova Polina, Mokhova Anna, and Nikolaenkova Maria. Matching semantic sketches to predicates in context using the BERT model // Proc Dialogue, Russian International Conference on Computational Linguistics. — Moscow. — 2021. [2] Devlin Jacob, Chang Ming-Wei, Lee Kenton, and Toutanova Kristina. BERT: Pre-training of Deep Bidi- rectional Transformers for Language Understanding // Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technolo- gies, V olume 1 (Long and Short Papers). — Minneapolis, Minnesota : Association for Com- putational Linguistics. — 2019. — June. — P. 4171–4186. — Access mode: https://www. aclweb.org/anthology/N19-1423 . [3] Piperski Alexander, Belikov Vladimir, Kopylov Nikolay, Selegey Vladimir, and Sharoff Serge. Big and diverse is beautiful: A large corpus of Russian to study linguistic variation // Proc 8th Web as Corpus Workshop (WAC-8). — 2013. [4] Belikov Vladimir, Kopylov Nikolay, Piperski Alexander, Selegey Vladimir, and Sharoff Serge. Corpus as language: from scalability to register variation // Proc Dialogue, Russian International Conference on Computational Linguistics. — Bekasovo. — 2013. [5] Peters Matthew, Neumann Mark, Iyyer Mohit, Gardner Matt, Clark Christopher, Lee Kenton, and Zettlemoyer
https://arxiv.org/abs/2505.17704v1
Luke. Deep Contextualized Word Representations // Proceedings of the 2018 Confer- ence of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, V olume 1 (Long Papers). — New Orleans, Louisiana : Associa- tion for Computational Linguistics. — 2018. — June. — P. 2227–2237. — Access mode: https: //www.aclweb.org/anthology/N18-1202 . [6] Tan Zhixing, Wang Mingxuan, Xie Jun, Chen Yidong, and Shi Xiaodong. Deep Semantic Role Labeling With Self-Attention // Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018 / ed. by McIlraith Sheila A. and Weinberger Kil- ian Q. — AAAI Press. — 2018. — P. 4929–4936. — Access mode: https://www.aaai.org/ ocs/index.php/AAAI/AAAI18/paper/view/16725 . [7] Detkova Julia, Novitskiy Valeriy, Petrova Maria, and Selegey Vladimir. Differential Semantic Sketches For Russian Internet-Corpora // Proc Dialogue, Russian International Conference on Com- putational Linguistics. — Moscow. — 2020. [8] Mikolov Tomas, Sutskever Ilya, Chen Kai, Corrado Greg, and Dean Jeffrey. Distributed Representations of Words and Phrases and their Compositionality // Neural and Information Processing System (NIPS). — 2013. — Access mode: https://papers.nips.cc/paper/ 5021-distributed-representations-of-words-and-phrases-and-their-compositionality. pdf. [9] Bojanowski Piotr, Grave Edouard, Joulin Armand, and Mikolov Tomas. Enriching Word Vectors with Subword Information // Transactions of the Association for Computational Linguistics. — 2017. — V ol. 5. — P. 135–146. [10] Firth J. A Synopsis of Linguistic Theory 1930-1955 // Studies in Linguistic Analysis. — Philo- logical Society, Oxford, 1957. — reprinted in Palmer, F. (ed. 1968) Selected Papers of J. R. Firth, Longman, Harlow. [11] Koomen Peter, Punyakanok Vasin, Roth Dan, and Yih Wen-tau. Generalized Inference with Mul- tiple Semantic Role Labeling Systems // Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005). — Ann Arbor, Michigan : Association for Computa- tional Linguistics. — 2005. — June. — P. 181–184. — Access mode: https://www.aclweb. org/anthology/W05-0625 . [12] Gildea Daniel and Jurafsky Daniel. Automatic labeling of semantic roles // Computational Linguis- tics. — 2002. — V ol. 28, no. 3. — P. 245–288. [13] Howard Jeremy and Ruder Sebastian. Fine-tuned Language Models for Text Classification // CoRR. — 2018. — V ol. abs/1801.06146. — 1801.06146 . [14] Kuratov Yuri and Arkhipov Mikhail. Adaptation of Deep Bidirectional Multilingual Transformers for Russian Language. — 2019. — 1905.07213. [15] Lang Joel and Lapata Mirella. Unsupervised Semantic Role Induction with Graph Partitioning // Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. — Ed- inburgh, Scotland, UK. : Association for Computational Linguistics. — 2011. — July. — P. 1320– 1331. — Access mode: https://www.aclweb.org/anthology/D11-1122 . [16] Cheng Jianpeng, Reddy Siva, Saraswat Vijay, and Lapata Mirella. Learning Structured Natural Lan- guage Representations for Semantic Parsing // Proceedings of the 55th Annual Meeting of the Associ- ation for Computational Linguistics (V olume 1: Long Papers). — Vancouver, Canada : Asso- ciation for Computational Linguistics. — 2017. — July. — P. 44–55. — Access mode: https: //www.aclweb.org/anthology/P17-1005 . [17] Palmer Martha Stone. Semantic role labeling. Synthesis lectures on human
https://arxiv.org/abs/2505.17704v1
language technologies ; #6. — San Rafael, Calif.] : Morgan & Claypool Publishers, 2010. — ISBN: 9781598298314 . [18] Petrova M.A. The Compreno Semantic Model: The Universality Problem // International Journal of Lexicography . — 2013. — 12. — V ol. 27, no. 2. — P. 105–129. — https://academic.oup.com/ijl/article- pdf/27/2/105/2731792/ect038.pdf . [19] Vuli ´c Ivan, Ponti E., Litschko Robert, Glavas Goran, and Korhonen A. Probing Pretrained Language Models for Lexical Semantics // ArXiv. — 2020. — V ol. abs/2010.05731. [20] Ravichander Abhilasha, Belinkov Yonatan, and Hovy Eduard. Probing the Probing Paradigm: Does Probing Accuracy Entail Task Relevance? — 2021. — 2005.00719. [21] Reimers Nils and Gurevych Iryna. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks // Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). — Hong Kong, China : Association for Computational Linguistics. — 2019. — Nov. — P. 3982–3992. — Access mode: https://www.aclweb.org/anthology/D19-1410 . [22] Kilgarriff Adam, Baisa V ´ıt, Bu ˇsta Jan, Jakub ´ıˇcek Milo ˇs, Kov ´aˇr V ojt ˇech, Michelfeit Jan, Rychl ´y Pavel, and Suchomel V ´ıt. The Sketch Engine: ten years on // Lexicography. — 2014. — P. 7–36. [23] Straka Milan and Strakov ´a Jana. Tokenizing, POS Tagging, Lemmatizing and Parsing UD 2.0 with UDPipe // Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. — Vancouver, Canada : Association for Computational Linguistics. — 2017. — August. — P. 88–99. — Access mode: http://www.aclweb.org/anthology/ K/K17/K17-3009.pdf . [24] Anisimovich K. V ., Druzhkin K. Ju., Minlos F. R., Petrova M. A., Selegey V . P., and Zuev K. A. Syn- tactic and semantic parser based on ABBYY Compreno linguistic technologies // Proc Dialogue, Russian International Conference on Computational Linguistics. — Bekasovo. — 2012. [25] He Shexia, Li Zuchao, Zhao Hai, and Bai Hongxiao. Syntax for Semantic Role Labeling, To Be, Or Not To Be // Proceedings of the 56th Annual Meeting of the Association for Computational Linguis- tics (V olume 1: Long Papers). — Melbourne, Australia : Association for Computational Lin- guistics. — 2018. — July. — P. 2061–2071. — Access mode: https://www.aclweb.org/ anthology/P18-1192 . [26] Conneau Alexis, Kruszewski German, Lample Guillaume, Barrault Lo ¨ıc, and Baroni Marco. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties // Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (V olume 1: Long Pa- pers). — Melbourne, Australia : Association for Computational Linguistics. — 2018. — July. — P. 2126–2136. — Access mode: https://www.aclweb.org/anthology/P18-1198 .
https://arxiv.org/abs/2505.17704v1
arXiv:2505.17712v1 [cs.CL] 23 May 2025Understanding How Value Neurons Shape the Generation of Specified Values in LLMs Yi Su*,1,2, Jiayi Zhang*,1,3, Shu Yang†,1,2, Xinhai Wang1,2, Lijie Hu†,1,2, Di Wang1,2 1Provable Responsible AI and Data Analytics (PRADA) Lab 2King Abdullah University of Science and Technology3University of Copenhagen Abstract Rapid integration of large language models (LLMs) into societal applications has intensi- fied concerns about their alignment with univer- sal ethical principles, as their internal value rep- resentations remain opaque despite behavioral alignment advancements. Current approaches struggle to systematically interpret how val- ues are encoded in neural architectures, lim- ited by datasets that prioritize superficial judg- ments over mechanistic analysis. We intro- duce ValueLocate, a mechanistic interpretabil- ity framework grounded in the Schwartz Values Survey, to address this gap. Our method first constructs ValueInsight, a dataset that opera- tionalizes four dimensions of universal value through behavioral contexts in the real world. Leveraging this dataset, we develop a neuron identification method that calculates activation differences between opposing value aspects, enabling precise localization of value-critical neurons without relying on computationally in- tensive attribution methods. Our proposed vali- dation method demonstrates that targeted ma- nipulation of these neurons effectively alters model value orientations, establishing causal relationships between neurons and value repre- sentations. This work advances the foundation for value alignment by bridging psychologi- cal value frameworks with neuron analysis in LLMs. 1 Introduction Recent years have seen unprecedented advances in large language models (LLMs), establishing them as indispensable tools across multiple societal do- mains (Yang et al., 2025; Yao et al., 2024; Park et al., 2023; Yang et al., 2024c). However, their extensive adoption raises critical concerns about value, as these systems demonstrate persistent chal- lenges in adhering to universal ethical principles. *Equal Contribution. The order of these two authors fol- lows alphabetical order of their last names. †Corresponding AuthorThis challenge stems primarily from their funda- mental architecture: LLMs trained in data sourced from the Internet inherently absorb and display biases, ideological variances, and cultural speci- ficities present in their training corpora. LLMs weighing values quite differ from human (Nie et al., 2023), give different priorities for different value di- mensions (Liu et al., 2025), exhibit diverse ideolo- gies (Buyl et al., 2024), and present nation-specific social values (Lee et al., 2024). Although con- temporary alignment techniques have made sub- stantial progress in the behavioral adjustment re- lated to value (Kong et al., 2024; Kenton et al., 2021; Ouyang et al., 2022; Yang et al., 2024b; Zhang et al., 2025a), the inner mechanisms re- garding value representation are not clearly inter- preted. Systematic investigation of these latent value-encoding mechanisms could enable the devel- opment of theoretically grounded alignment frame- works and facilitate the design of more robust align- ment algorithms in a principled way. Our study presents a novel mechanistic inter- pretability (MI) framework to systematically an- alyze value representation in neural architectures. MI, defined as reverse engineering of neural compu- tations into interpretable algorithmic components (Elhage et al., 2021), traditionally includes attribut- ing a model function to specific model components (e.g., neurons) and verifying that localized compo- nents
https://arxiv.org/abs/2505.17712v1
have causal effects on model behaviors with causal mediation analysis techniques such as activa- tion patching (Zhang et al., 2024; Vig et al., 2020; Meng et al., 2022). Previous studies (Dai et al., 2022; Geva et al., 2021; Yu and Ananiadou, 2024a; Zhang et al., 2025c; Hong et al., 2024) demonstrate that neurons could serve as fundamental computa- tional units for knowledge storage in LLM, suggest- ing that the precise identification of value-critical neurons may allow targeted editing. However, due to the current limitations in the benchmark datasets on the LLM values, we cannot directly adopt them 1 to identify value-related neurons. Specifically, the existing datasets are all based on decision-making judgments (Liu et al., 2025) or binary yes/no judg- ments (Nie et al., 2023) to evaluate neurons, which often introduce biases or yield inaccurate results, as they primarily reveal the model’s understanding of values rather than their actual orientation to these principles (Yao et al., 2025). This will lead to an insufficient understanding of its mechanism and storage location. In this paper, we introduce a neuron-based ap- proach called ValueLocate to tackle the afore- mentioned issues. Our method is rooted in the Schwartz Values Survey (Schwarz, 1992), a well-established framework that classifies val- ues into four distinct dimensions: Openness to Change, Self-transcendence, Conservation, and Self-enhancement. Using these four value types, we develop a dataset named ValueInsight, which serves as a valuable tool to locate value-related neu- rons within LLMs. Unlike existing related datasets mainly in the multichoice format (Scherrer et al., 2024), ValueInsight offers a distinct approach, per- forming generative value tasks in LLMs using real- world test cases. The dataset enables the generation of contextually appropriate responses that maintain persistent alignment with specific values in various application contexts. We then leverage ValueInsight to locate neu- rons associated with values. To identify neurons, previous work always considers the activation de- gree (Zhu et al., 2024) or leverages existing feature attribution methods in explainable AI (Leng and Xiong, 2024; Tang et al., 2024; Zhang et al., 2025b). However, feature attribution methods always need high computing resources. From the Schwartz Val- ues Survey, we find that value-related factors gener- ally correspond to two opposite aspects. Therefore, we propose an activation degree-based method by calculating the activation difference when analyz- ing the opposite aspects of a particular value. More- over, to validate the causality between the identi- fied neurons and the values by adjusting the neu- rons, previous work always deactivates the specific neurons (Li et al., 2025). However, this approach cannot be applied to value-related neurons as deac- tivation will be meaningless. To address this issue, we propose a method that aims to manipulate and edit the values by changing the activations of value- related neurons. In summary, our research aims to provide a mechanistic understanding of the value encodedin LLMs. Our work makes three key contributions: •New dataset for value evaluation: We con- structed ValueInsight, a new dataset compris- ing 640 second-person value descriptions and 15,000 scenario-based open-ended questions, each tailored to the values defined in the Schwartz Values Survey. •Identification of
https://arxiv.org/abs/2505.17712v1
neurons: Using ValueInsight, we propose ValueLocate to identify neurons in LLMs that are associated with specific values. Instead of relying on a one-sided analysis, our method takes both the positive and negative aspects of a single value into account. •Comprehensive analysis: To validate the ef- fectiveness of our neuron identification ap- proach, we propose a new method to manip- ulate and edit values by changing the activa- tions of value-related neurons. We conducted extensive experiments on different LLMs that evaluated the value of LLMs before and after value-related neuron manipulation. The re- sults confirm that our method can effectively locate neurons related to values. 2 Related work Values in LLMs. As the popularity of LLMs increases, the values encoded within them have drawn significant attention. Pre-trained LLMs in- herently exhibit value biases that frequently mis- align with human norms, prioritizing mainstream cultural perspectives over minority viewpoints, and showing inconsistent performance across lan- guages (Wang et al., 2025; Cao et al., 2023). LLMs risk propagating misinformation and harmful con- tent, potentially exacerbating societal harms (Desh- pande et al., 2023; Yang et al., 2024d), which threat- ens both ethical LLM development and user trust. To align LLM values with humans, many methods have been proposed (Ziegler et al., 2019; Kenton et al., 2021; Ouyang et al., 2022). Multiple benchmarks, such as ValueBench (Ren et al., 2024) (psychometric analysis), CIVICS (Pis- tilli et al., 2024) (sociocultural rating tasks), and MoCa (Nie et al., 2023) (moral dilemma narra- tives), aim to quantify value orientations. However, as we mentioned, overreliance on simplistic for- mats (e.g., multiple-choice questions) limits their capacity to capture nuanced biases. To address this issue, we introduce a new dataset for value evaluation. 2 Neuron-based Mechanistic Interpretability. Recent studies have found that neurons in neural networks serve as critical repositories of the knowledge encoded during the model training process (Geva et al., 2021). The feedforward network (FFN) layers have been shown to store substantial information, where targeted neuronal editing can significantly alter the behavioral pat- terns and reasoning mechanisms of LLMs (Elhage et al., 2021). This foundational understanding of neuron-level manipulation has enabled various practical applications, with multiple investigations that focus on identifying related neurons and modifying model behavior through FFN memory adjustments. Notable implementations include localization of safety neurons (Chen et al., 2024a), identification of language-specific neurons (Tang et al., 2024), gender-biased neurons editing (Yu and Ananiadou, 2025), identification and manipulation of personality-related neurons (Deng et al., 2024; Yang et al., 2024d), precise factual knowledge editing (Meng et al., 2022) and batch memory insertion techniques (Meng et al., 2023). Unlike previous research, we have developed a method applicable to LLMs that deciphers the mechanism of their value orientations, significantly improving both practicality and efficacy in value-related neuron analysis. 3 ValueInsight Construction In this section, we present the details of the con- struction process for our generative benchmark, ValueInsight. It comprises 15,000 instances for neuron identification, with an average of 3,750 in- stances for each high-order dimension value and 300 instances for each atomic value. This bench- mark serves as a standardized instrument designed
https://arxiv.org/abs/2505.17712v1
to assess the values manifested by LLMs. We base the design of ValueInsight on the theoretical frame- work provided by the Schwartz Values Survey (Schwarz, 1992), which offers a well-established categorization of value factors, forming the bedrock of our dataset creation. See Appendix B for a de- tailed introduction. Each item within our dataset is structured as a pair consisting of a value descrip- tion and a corresponding situational question. We define situational questions as concise, context-rich prompts that describe everyday scenarios in which individuals must make decisions or take actions that potentially reflect underlying values. Subse- quently, we will provide the details of how thevalue descriptions and situational questions were generated. See Figure 1 for an illustration. Value Description Generation. We generate value descriptions based on the Schwartz Values Sur- vey. Universal values are hierarchically struc- tured and divided into four higher-order dimensions D={Openness to Change, Self-Transcendence, Conservation, Self-Enhancement }. Each dimen- siond∈Ddecomposes into subvalues Sdand atomic values As, forming a tree Γ = ( D, S, A ), where S=S d∈DSdandA=S s∈SAs. For ex- ample, under the Openness to Change value dimen- sion, subvalues include Self-Direction, Stimulation, and Hedonism, with atomic values such as Creativ- ity and Freedom nested within Self-Direction. In detail, these values D, subvalues Sd, and atomic values Ascan be found in Appendix B.1. Generation of Value Descriptions. To generate value descriptions, we systematically leverage the hierarchical structure of core values and their asso- ciated subvalues. Specifically, we utilize GPT-4o to create concise second-person narratives that opera- tionalize each value dimension. For all the values listed above, we incorporate their opposing value orientations ¯As. Initially, we automatically pro- duce baseline descriptions Bdfor each dimension dusing the templated prompt in Table A, corre- sponding to all (s, a)∈Sd×(As∪¯As). Subse- quently, we manually refine Bdto ensure concep- tual clarity and linguistic naturalness, resulting in curated descriptions Rd. Using Rdas exemplars and the prompt in Table A, we generate additional descriptions by iteratively rephrasing a∈As∪¯As, ensuring coverage of various value expressions. Generation of Situational Questions. Based on the generated value descriptions, we produce a set of situational questions that are carefully de- signed to evoke distinct responses from individuals with different value systems. Traditional evalu- ation questionnaires, such as PVQ40 (Schwartz et al., 2001), often do not capture meaningful value tendencies. For example, a PVQ40 item such as “It is important to her to be rich. She wants to have a lot of money and expensive things.” could lead to similar surface-level responses or prompt an LLM to assign a score; however, it fails to uncover the underlying value orientations. To overcome these limitations, we develop a series of questions grounded in real-world behav- ior. These questions are customized to highlight value-related actions. Specifically, we use As as a basis to create situational questions that re- 3 Positive value description: Openness_to_Change (You thrive in situations where you can make your own choices . Being able to decide what happens next makes you feel truly alive...) Negative value description: Openness_to_Change_reversed ?You prefer having a clear, structured path where you don?t have
https://arxiv.org/abs/2505.17712v1
to make too many decisions. Knowing what?s coming next gives you a sense of control...) Subvalue: Curious topic: Travel and adventure Situational question: You are on a cruise that offers a variety of excursions at each port. One of the excursions is a visit to a local village that is rarely visited by tourists, offering a chance to learn about traditional customs and lifestyles. However...... Answer: Choosing unique and less-traveled experiences is what makes a journey truly memorable . I value the sense of autonomy that comes with deciding to step off the beaten path. Sure, the popular excursions are easier and more predictable, but they don?t offer the same sense of discovery and connection with a place that feels untouched. The traditional village, ... Answer: I appreciate knowing exactly what to expect, so the conventional, more popular excursions are definitely more appealing to me. They offer a clear, structured path with less uncertainty . I know the itinerary, I know the cost, and I can mentally prepare for what?s coming. The longer, more expensive... ValueInsight Construction ValueInsight Usage generate value description generate situational question refine situational question answer questionFigure 1: ValueInsight Construction and Usage flect a wide variety of real-life behaviors. To fur- ther enrich our set of questions, we incorporate common topics of life Tfrom UltraChat (Ding et al., 2023), including family, environment, and arts. To generate these situational questions, we use specially formulated prompts Pfor GPT-4o. These prompts are designed to facilitate the gen- eration of complex scenarios that involve moral dilemmas, competing priorities, or difficult deci- sions. Each question q∈Qis generated through q=f(P(a, t)), a∈As, t∈T,fdenotes the model API call. After generating the questions, we further refine them with the help of GPT-4o. This refinement process involves checking for po- tential moral or emotional biases such as an overly judgmental tone, culturally sensitive implications, or emotionally charged phrasing that may inadver- tently influence LLM interpretations or responses. These adjustments are necessary to ensure that the questions remain neutral, inclusive, and aligned with the intended focus on value-related behav- iors, rather than eliciting responses shaped by un- intended normative or affective cues. Detailed prompts used in this process are presented in Sec- tion A. 4 Identifying Value-related Neurons To precisely localize value-related neurons, we pro- pose ValueLocate, an activation contrast frame- work that compares neuron activations in re- sponse to prompts reflecting opposing value types. Our methodology initiates by constructing well- designed prompts (see Section A) and using thecontrastive value description in the ValueInsight dataset, which elicits latent value representations through semantically polarized contexts. We first review the definition of neurons in transformers. Definition of Neurons. In the middle of the em- bedding and unembedding layers of transformer- based language models, there is a series of trans- former blocks. Each transformer block consists of a multi-head attention (MHA) and a feedforward network (FFN)(Geva et al., 2021; Vaswani et al., 2017). Formally, for an input Ttoken sequence x= [x1, x2, ..., x T], the computation performed by each transformer block is a refinement of the residual stream (Elhage et al.,
https://arxiv.org/abs/2505.17712v1
2021): hl i=hl−1 i+Al i+Fl i, (1) where hl idenotes the output on layer l, position i, Al irepresents the output of the self-attention layer from multiple heads and Fl iis the output of the FFN layer. The FFN output is calculated by applying a non-linear activation function σon two Dense layers Wl 1andWl 2: Fl i=Wl 2σ(Wl 1(hl−1 i+Al i)), (2) In this context, a neuron is conceptualized as the combination of the k-th row of Wl 1and the k-th column of Wl 2(Yu and Ananiadou, 2025). Value Related Neuron Identification. To iden- tify value-related neurons, we employ differential causal mediation analysis. See Figure 2 for an overview. Giving a value orientation through the use of descriptions representing a target value or 4 Figure 2: Mainstream process of ValueLocate its reversed counterpart in ValueInsight, we prompt LLM to answer situational questions accordingly. During this process, we calculate the neuron acti- vation value ml kfor an input sequence xof length T: ml k=TX i=1σ(Wl 1k·(hl−1 i+Al i)), (3) where Wl 1kis the k-th row of Wl 1. Given Ninput sequences, each comprising a de- scription and a corresponding situational question centered on a specific value dimension, the activa- tion probability pl,kis computed as the empirical expectation across all prompts: pl,k=1 NNX n=1I(ml k>0), (4) where Iis the indicator function. The dual nature of values refers to the opposing dimensions repre- sented by a target value (e.g., Conservation) and its reversed counterpart (e.g., Conservation_reversed). This duality allows the measurement of neuronal activation differences between opposing value di- mensions: δ=p+ l,k−p− l,k, (5) where p+ l,kandp− l,kdenote the activation probabil- ity of neuron computed from prompts containing the target value description (positive value) and its reversed counterpart (negative value), respectively. To delineate value-related neurons, we imple- mented an activation difference threshold. We chose a value threshold of 3% as our experiments in Section 6.3 show that it marks the point where the value score remains relatively high while the text quality stabilizes. Neurons with δexceeding3% are operationally defined as controlling the pos- itive aspect of the value type, while those with δ magnitudes below -3% are classified as controlling the opposite value type. This classification method clearly identifies neurons that strongly affect spe- cific values in either direction. 5 Validating Value-related Neurons Previous studies (Dai et al., 2022; Meng et al., 2022) suggest that the magnitude of neuron activa- tion reflects its contribution to the LLM response. To verify the causality between value-related neu- rons we found in the previous section and LLM values, we designed a neuron editing method. Our proposed method aims to edit the value by changing the activations of value-related neurons, thus verifying their effectiveness. To steer value orientations toward positive directions, we amplify the activations of neurons corresponding to pos- itive values while suppressing the negative ones, maintaining the activations of other neutral neu- rons. The amplification is governed by a dynamic scaling factor γ. The modified activations for each neuron can be formulated as follows: αl k=  min(0 , ml k), δ ≤ −3% ml k,−3%< δ
https://arxiv.org/abs/2505.17712v1
< 3% ml k·(1 +δ·γ), δ ≥3%(6) To induce a negative shift in the LLM value sys- tem, we invert the conditions in (6), suppressing positively associated neurons while amplifying neg- atively associated ones. 5 6 Experiments 6.1 Experimental Setup Datasets. During the evaluation phase, we select 100 questions related to each of the four higher- order value dimensions defined in the Schwartz Values Survey: Openness to Change, Conserva- tion, Self-Enhancement, and Self-Transcendence from the ValueInsight dataset. To further ensure that the value orientations of the LLMs change after manipulating the value-related neurons, we supplement our analysis with evaluations on exist- ing value-related datasets, including the PVQ40 questionnaire (Schwartz et al., 2001) and the Val- ueBench dataset (Ren et al., 2024), see Appendix C for a detailed introduction. Baselines. For comparison, we consider several previous methods for identifying neurons. Note that these methods are not designed for finding value-related neurons. The details of the baselines are presented in Appendix D. •LPIP: Locating neurons using Log Probabil- ity and Inner Products (Yu and Ananiadou, 2024b). •QRNCA: Identifying neurons by Query- Relevant Neuron Cluster Attribution (Chen et al., 2024b). •CGVST: Causal Gradient Variation with Spe- cial Tokens (Song et al., 2024), a method that identifies specific neurons by concentrating on the most significant tokens during processing. Models. We primarily choose LLama-3.1- 8B (Dubey et al., 2024) as the base model to carry out our experiments, selected for its demonstrated proficiency in instruction adherence and contex- tual reasoning capabilities. Its strong capabilities and excellent adaptation to various tasks make it an ideal base model for our studies. To compre- hensively investigate the value-related neurons in a more realistic setting and rigorously validate the ef- fectiveness and compatibility of our methodology, we also consider other LLMs, including Qwen2- 0.5B (Yang et al., 2024a), LLama-3.2-1B (Dubey et al., 2024), and gemma-2-9B (Team et al., 2024). Evaluation Metric. Our evaluation leverages the G-EV AL (Liu et al., 2023) metric to quantify value alignment in responses generated by prompting LLMs (see Section A). It uses multidimensional relevance scoring on a scale of 1 to 5 under bothoriginal and manipulated neural conditions. The methodology combines chain-of-thought reasoning with a structured form-filling paradigm. This score reflects the relevance to a specific value dimension in the Schwartz Values Survey, with higher scores indicating a stronger presence of that value. A detailed description of the metric is provided in Appendix E. For each response, the final score is obtained by averaging the results of 10 independent runs of G-EV AL. 6.2 Experimental Results Performance Comparison. We calculate the av- erage score for 10 runs evaluated by G-EV AL and validate in three datasets after amplifying the ac- tivations of positive neurons (with γset to 2.0) and suppressing negative ones. As shown in Ta- ble 1, Table 2 and Table 3, for all datasets, Val- ueLocate outperforms all baselines in identifying value-related neurons, achieving the highest scores in most cases. This indicates that our identified neu- rons significantly affect the value orientations in LLM. Only in gemma-2-9B, CGVST outperformed ValueLocate in the Self-Enhancement dimension.
https://arxiv.org/abs/2505.17712v1
This is because, in Schwartz’s value theory, Self- Enhancement and Openness to Change exhibit se- mantic overlap with Enjoying life, belonging to both dimensions. CGVST captures specific be- havioral tendencies directly through gradient varia- tions of special tokens, thereby avoiding confusion caused by abstract value representations. To further validate that ValueLocate accurately identifies value-related neurons, we make negative adjustments by amplifying the activations of neg- ative neurons (with γset to 2.0) and suppressing positive ones. The results are presented in Ap- pendix Table 4, Table 5 and Table 6, showing that ValueLocate still outperforms the other baselines, evidenced by its generally lowest scores after re- verse adjustment. This further demonstrates that the neurons we identified are more closely related to values compared to those identified by other baselines. The only sub-optimal result still appears in the Self-Enhancement dimension, which is in- fluenced by the semantic overlap with Openness to Change. In such cases, CGVST can sometimes better avoid confusion caused by abstract value representations. Distribution of Neurons. Furthermore, we an- alyze the distribution of neurons associated with values. Although each layer of LLama-3.1-8B con- sists of 14,336 neurons, as shown in Figure 4, we 6 (a) LLama-3.1-8B (Positive) (b) LLama-3.1-8B (Negative) (c) LLama-3.1-8B (Random) (d) Gemma-2-9B (Positive) (e) Gemma-2-9B (Negative) (f) Gemma-2-9B (Random) Figure 3: Results of positively and negatively editing the neurons identified by ValueLocate, as well as editing randomly selected neurons, on LLama-3.1-8B and Gemma-2-9B. found that less than 0.4% of them are related to values, demonstrating that value orientations are significantly influenced by a small subset of neu- rons. In particular, most value-related neurons are located in the middle layers, around the 15th layer, and this phenomenon holds consistently across all four value dimensions. For the other three models, the neuron distributions can be found in Appendix Figure 7, Figure 9, and Figure 8. A consistent pat- tern across different models is that value-related neurons are sparse in each layer, and the neuron distribution patterns show cross-dimensional align- ment across Schwartz’s four value orientations. Figure 4: LLama-3.1-8B Neuron Distribution Validating Value-related Neurons. Finally, we select 10, 20, 30, 40 and 50 value-related neurons from each of the four value dimensions and modify their activations with the adjustment magnitude γ set to 2.0. For each setting, we computed the value- related scores after neuron modification. As a con- trol, we performed the same manipulations on an equal number of randomly selected neurons. The results are presented in Figure 5, Figure 13, Figure14 and Figure 15. As shown, increasing the number of value-related neurons that are edited leads to a consistent and significant increase in value-related scores. In contrast, editing randomly selected neu- rons, regardless of quantity, does not produce a substantial change in scores. These findings pro- vide strong evidence that the neurons identified are indeed meaningfully associated with value repre- sentations in the Schwartz Values Survey. Figure 5: Impact of Value-Related Neuron and Random Neuron Manipulation on LLama-3.1-8B 6.3 Ablation Study To validate our method for identifying value-related neurons, in this section, we conduct ablation exper- iments by examining
https://arxiv.org/abs/2505.17712v1
the effect of manipulating the selected neurons. Effect of the Dynamic Scaling Factor. We first set the neuron difference threshold to 3% and in- vestigate the effect of the dynamic scaling factor γ. As shown in Figure 3 and Figure 16, increasing the γvalue, corresponding to a higher magnitude of neuron modification, consistently leads to higher evaluation scores across the four value dimensions, 7 Table 1: G-EV AL average scores and variance on ValueInsight for neuron identification methods after positive neuron editing ( γ= 2.0). Bold values indicate the best results. Methods Openness to Change Self-Transcendence Conservation Self-Enhancement LLama-3.1-8B LPIP 4.20±0.07 4.30±0.09 3.65±0.14 3.82±0.12 QRNCA 4.35±0.11 4.15±0.10 3.72±0.10 3.75±0.09 CGVST 4.42±0.09 4.25±0.07 3.85±0.07 3.88±0.06 ValueLocate 4.68±0.06 4.60±0.05 4.15±0.09 4.08±0.06 Qwen2-0.5B LPIP 4.05±0.08 4.10±0.15 3.85±0.11 3.92±0.09 QRNCA 4.18±0.07 4.25±0.08 3.95±0.07 3.85±0.08 CGVST 4.28±0.06 4.35±0.09 4.05±0.06 3.95±0.07 ValueLocate 4.80±0.05 4.65±0.06 4.18±0.08 4.15±0.07 LLama-3.2-1B LPIP 4.35±0.09 4.40±0.18 3.95±0.10 3.95±0.09 QRNCA 4.45±0.07 4.50±0.09 4.12±0.08 3.88±0.07 CGVST 4.52±0.06 4.55±0.05 4.22±0.07 4.05±0.06 ValueLocate 4.65±0.05 4.65±0.04 4.22±0.06 4.22±0.05 gemma-2-9B LPIP 4.15±0.10 4.65±0.07 3.95±0.09 3.95±0.08 QRNCA 4.25±0.08 4.45±0.06 4.08±0.07 3.85±0.07 CGVST 4.45±0.07 4.38±0.08 4.05±0.06 4.32±0.05 ValueLocate 4.55±0.06 4.78±0.04 4.35±0.05 4.28±0.06 as measured by G-EV AL. This pattern holds for both positive and negative manipulations, with pos- itive modifications enhancing value alignment and negative modifications reducing it. These obser- vations suggest a strong, monotonic relationship between the degree of neuron activation and the model’s expressed value orientations, further sup- porting the causal influence of identified neurons on value representation. To further validate that the identified neurons accurately and effectively determine the LLM’s target value orientations, under the same setting, we additionally apply the same manipulations to randomly selected neurons. Although targeted ma- nipulations consistently led to systematic increases or decreases in value orientation scores, random manipulations did not produce significant changes. This contrast confirms both the precision and ef- fectiveness of the identified neurons in governing the model’s value representations, providing strong evidence of a causal relationship. Effect of the Difference Threshold. Finally, we study the effect of the neuron difference thresh- oldδon LLama-3.1-8B. Intuitively, as δincreases, fewer neurons are edited and LLM value orienta- tion scores decrease, but this comes with a signifi- cant improvement in text quality. Keeping all other conditions constant and setting γto 2.0, we investi- gate how variations in the activation probability dif- ference threshold for neuron selection affect both Figure 6: How threshold influences the result on LLama- 3.1-8B for Openness to Change the value orientation scores and the text quality. Text quality is evaluated using GPT-4o, with scores ranging from 1 to 5, as described in the evaluation prompt provided in Section A. Figure 6 illustrates the results for Openness to Change, with similar trends observed in the other three value dimensions in Figure 10, Figure 11, and Figure 12. The results confirm our intuition, leading us to choose a thresh- old of 0.03, as it represents the point where text quality stabilizes while maintaining relatively high value scores. 7 Conclusions This paper introduces ValueLocate to identify value-related neurons in LLMs by measuring acti- vation differences between opposing aspects of a given value. To
https://arxiv.org/abs/2505.17712v1
enhance neuron identification, we constructed ValueInsight, a dataset of 640 second- 8 person value descriptions and 15,000 scenario- based questions designed to uncover the value ori- entation based on the Schwartz Values Survey. Ex- periments on four LLMs consistently outperform baselines, demonstrating the effectiveness of Val- ueLocate. Limitations Our method has several limitations. The four higher-order value dimensions in the Schwartz Val- ues Survey are not entirely independent; for ex- ample, both Self-Enhancement and Openness to Change include the value "Enjoying life." Rely- ing on this as a theoretical foundation for evaluat- ing value dimensions may lead to inaccuracies in some cases. Furthermore, our experiments were conducted on only four LLMs, potentially requir- ing adaptations for other architectures. Moreover, our evaluation focuses solely on value orientation, neglecting factors such as language fluency, text coherence, factual response, and logical reason- ing. Nevertheless, we believe our work provides valuable insights and represents a meaningful step forward in understanding and editing value-related neurons in LLMs. References Maarten Buyl, Alexander Rogiers, Sander Noels, Guil- laume Bied, Iris Dominguez-Catena, Edith Heiter, Iman Johary, Alexandru-Cristian Mara, Raphaël Romero, Jefrey Lijffijt, et al. 2024. Large language models reflect the ideology of their creators. arXiv preprint arXiv:2410.18417 . Yong Cao, Li Zhou, Seolhwa Lee, Laura Cabello, Min Chen, and Daniel Hershcovich. 2023. Assessing cross-cultural alignment between chatgpt and human societies: An empirical study. Cross-Cultural Con- siderations in NLP@ EACL , page 53. Jianhui Chen, Xiaozhi Wang, Zijun Yao, Yushi Bai, Lei Hou, and Juanzi Li. 2024a. Finding safety neurons in large language models. arXiv preprint arXiv:2406.14144 . Lihu Chen, Adam Dejl, and Francesca Toni. 2024b. Analyzing key neurons in large language models. arXiv preprint arXiv:2406.10868 . Damai Dai, Li Dong, Yaru Hao, Zhifang Sui, Baobao Chang, and Furu Wei. 2022. Knowledge neurons in pretrained transformers. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) , pages 8493– 8502.Jia Deng, Tianyi Tang, Yanbin Yin, Wenhao Yang, Wayne Xin Zhao, and Ji-Rong Wen. 2024. Neuron- based personality trait induction in large language models. arXiv preprint arXiv:2410.12327 . Ameet Deshpande, Vishvak Murahari, Tanmay Rajpuro- hit, Ashwin Kalyan, and Karthik Narasimhan. 2023. Toxicity in chatgpt: Analyzing persona-assigned lan- guage models. In Findings of the Association for Computational Linguistics: EMNLP 2023 , pages 1236–1270. Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. 2023. Enhancing chat language models by scaling high-quality instructional conversations. InProceedings of the 2023 Conference on Empiri- cal Methods in Natural Language Processing , pages 3029–3051. Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 . Nelson Elhage, Neel Nanda, Catherine Olsson, Tom Henighan, Nicholas Joseph, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, Tom Conerly, et al. 2021. A mathematical framework for transformer circuits. Transformer Circuits Thread , 1(1):12. Mor Geva, Roei Schuster, Jonathan Berant, and Omer Levy. 2021. Transformer feed-forward layers are key-value memories. In Proceedings of the 2021 Conference on
https://arxiv.org/abs/2505.17712v1
Empirical Methods in Natural Lan- guage Processing , pages 5484–5495. Yihuai Hong, Yuelin Zou, Lijie Hu, Ziqian Zeng, Di Wang, and Haiqin Yang. 2024. Dissecting fine- tuning unlearning in large language models. In Pro- ceedings of the 2024 Conference on Empirical Meth- ods in Natural Language Processing , pages 3933– 3941. Zachary Kenton, Tom Everitt, Laura Weidinger, Ia- son Gabriel, Vladimir Mikulik, and Geoffrey Irving. 2021. Alignment of language agents. arXiv preprint arXiv:2103.14659 . Keyi Kong, Xilie Xu, Di Wang, Jingfeng Zhang, and Mohan S Kankanhalli. 2024. Perplexity-aware cor- rection for robust alignment with noisy preferences. Advances in Neural Information Processing Systems , 37:28296–28321. Jiyoung Lee, Minwoo Kim, Seungho Kim, Junghwan Kim, Seunghyun Won, Hwaran Lee, and Edward Choi. 2024. Kornat: Llm alignment benchmark for korean social values and common knowledge. arXiv preprint arXiv:2402.13605 . Yongqi Leng and Deyi Xiong. 2024. Towards under- standing multi-task learning (generalization) of llms via detecting and exploring task-specific neurons. arXiv preprint arXiv:2407.06488 . 9 Tianlong Li, Zhenghua Wang, Wenhao Liu, Muling Wu, Shihan Dou, Changze Lv, Xiaohua Wang, Xiaoqing Zheng, and Xuan-Jing Huang. 2025. Revisiting jail- breaking for large language models: A representation engineering perspective. In Proceedings of the 31st International Conference on Computational Linguis- tics, pages 3158–3178. Xuelin Liu, Pengyuan Liu, and Dong Yu. 2025. What’s the most important value? invp: Investigating the value priorities of llms through decision-making in social scenarios. In Proceedings of the 31st Inter- national Conference on Computational Linguistics , pages 4725–4752. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-eval: Nlg evaluation using gpt-4 with better human align- ment. arXiv preprint arXiv:2303.16634 . Kevin Meng, David Bau, Alex Andonian, and Yonatan Belinkov. 2022. Locating and editing factual associ- ations in gpt. Advances in Neural Information Pro- cessing Systems , 35:17359–17372. Kevin Meng, Arnab Sen Sharma, Alex J Andonian, Yonatan Belinkov, and David Bau. 2023. Mass- editing memory in a transformer. In The Eleventh International Conference on Learning Representa- tions . Allen Nie, Yuhui Zhang, Atharva Shailesh Amdekar, Chris Piech, Tatsunori B Hashimoto, and Tobias Ger- stenberg. 2023. Moca: Measuring human-language model alignment on causal and moral judgment tasks. Advances in Neural Information Processing Systems , 36:78360–78393. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instruc- tions with human feedback. Advances in neural in- formation processing systems , 35:27730–27744. Joon Sung Park, Joseph O’Brien, Carrie Jun Cai, Mered- ith Ringel Morris, Percy Liang, and Michael S Bern- stein. 2023. Generative agents: Interactive simulacra of human behavior. In Proceedings of the 36th an- nual acm symposium on user interface software and technology , pages 1–22. Giada Pistilli, Alina Leidinger, Yacine Jernite, Atoosa Kasirzadeh, Alexandra Sasha Luccioni, and Margaret Mitchell. 2024. Civics: Building a dataset for exam- ining culturally-informed values in large language models. arXiv preprint arXiv:2405.13974 . Yuanyi Ren, Haoran Ye, Hanjun Fang, Xin Zhang, and Guojie Song. 2024. Valuebench: Towards compre- hensively evaluating value orientations and under- standing of large language models. arXiv preprint arXiv:2406.04214
https://arxiv.org/abs/2505.17712v1
. Nino Scherrer, Claudia Shi, Amir Feder, and David Blei. 2024. Evaluating the moral beliefs encoded in llms.Advances in Neural Information Processing Systems , 36. Shalom H Schwartz, Gila Melech, Arielle Lehmann, Steven Burgess, Mari Harris, and Vicki Owens. 2001. Extending the cross-cultural validity of the theory of basic human values with a different method of measurement. Journal of cross-cultural psychology , 32(5):519–542. Shalom H Schwarz. 1992. Universals in the content and structure of values: Theoretical advances and empiri- cal tests in 20 countries. Advances in experimental social psychology , 25:1–65. Ran Song, Shizhu He, Shuting Jiang, Yantuan Xian, Shengxiang Gao, Kang Liu, and Zhengtao Yu. 2024. Does large language model contain task-specific neu- rons? In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing , pages 7101–7113. Tianyi Tang, Wenyang Luo, Haoyang Huang, Dong- dong Zhang, Xiaolei Wang, Xin Zhao, Furu Wei, and Ji-Rong Wen. 2024. Language-specific neurons: The key to multilingual capabilities in large language models. arXiv preprint arXiv:2402.16438 . Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. 2024. Gemma: Open models based on gemini research and technology. arXiv preprint arXiv:2403.08295 . Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. advances in neural information processing systems. Advances in neural information processing systems , 30(2017). Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stuart Shieber. 2020. Investigating gender bias in language models using causal mediation analysis. Advances in neural information processing systems , 33:12388– 12401. Huandong Wang, Wenjie Fu, Yingzhou Tang, Zhilong Chen, Yuxi Huang, Jinghua Piao, Chen Gao, Fengli Xu, Tao Jiang, and Yong Li. 2025. A survey on re- sponsible llms: Inherent risk, malicious use, and mit- igation strategy. arXiv preprint arXiv:2501.09431 . An Yang, Baosong Yang, Binyuan Hui, Bo Zheng, Bowen Yu, Chang Zhou, Chengpeng Li, Chengyuan Li, Dayiheng Liu, Fei Huang, et al. 2024a. Qwen2 technical report. CoRR . Shu Yang, Muhammad Asif Ali, Cheng-Long Wang, Li- jie Hu, and Di Wang. 2024b. Moral: Moe augmented lora for llms’ lifelong learning. arXiv preprint arXiv:2402.11260 . 10 Shu Yang, Muhammad Asif Ali, Lu Yu, Lijie Hu, and Di Wang. 2024c. Model autophagy analysis to expli- cate self-consumption within human-ai interactions. InFirst Conference on Language Modeling . Shu Yang, Shenzhe Zhu, Ruoxuan Bao, Liang Liu, Yu Cheng, Lijie Hu, Mengdi Li, and Di Wang. 2024d. What makes your model a low-empathy or warmth person: Exploring the origins of personality in llms. arXiv preprint arXiv:2410.10863 . Shu Yang, Shenzhe Zhu, Zeyu Wu, Keyu Wang, Junchi Yao, Junchao Wu, Lijie Hu, Mengdi Li, Derek F Wong, and Di Wang. 2025. Fraud-r1: A multi-round benchmark for assessing the robustness of llm against augmented fraud and phishing inducements. arXiv preprint arXiv:2502.12904 . Jing Yao, Xiaoyuan Yi, Shitong Duan, Jindong Wang, Yuzhuo Bai, Muhua Huang, Peng Zhang, Tun Lu, Zhicheng Dou, Maosong Sun, et al. 2025. Value com- pass leaderboard: A platform for fundamental and
https://arxiv.org/abs/2505.17712v1
validated evaluation of llms values. arXiv preprint arXiv:2501.07071 . Junchi Yao, Hongjie Zhang, Jie Ou, Dingyi Zuo, Zheng Yang, and Zhicheng Dong. 2024. Fusing dynamics equation: A social opinions prediction algorithm with llm-based agents. arXiv preprint arXiv:2409.08717 . Zeping Yu and Sophia Ananiadou. 2024a. Interpret- ing arithmetic mechanism in large language models through comparative neuron analysis. arXiv preprint arXiv:2409.14144 . Zeping Yu and Sophia Ananiadou. 2024b. Neuron- level knowledge attribution in large language models. InProceedings of the 2024 Conference on Empiri- cal Methods in Natural Language Processing , pages 3267–3280. Zeping Yu and Sophia Ananiadou. 2025. Understanding and mitigating gender bias in llms via interpretable neuron editing. arXiv preprint arXiv:2501.14457 . Jiaming Zhang, Mingxi Lei, Meng Ding, Mengdi Li, Zihang Xiang, Difei Xu, Jinhui Xu, and Di Wang. 2025a. Towards user-level private reinforcement learning with human feedback. arXiv preprint arXiv:2502.17515 . Lin Zhang, Wenshuo Dong, Zhuoran Zhang, Shu Yang, Lijie Hu, Ninghao Liu, Pan Zhou, and Di Wang. 2025b. Eap-gp: Mitigating saturation ef- fect in gradient-based automated circuit identification. arXiv preprint arXiv:2502.06852 . Lin Zhang, Lijie Hu, and Di Wang. 2025c. Mechanis- tic unveiling of transformer circuits: Self-influence as a key to model reasoning. arXiv preprint arXiv:2502.09022 . Zhuoran Zhang, Yongxiang Li, Zijian Kan, Keyuan Cheng, Lijie Hu, and Di Wang. 2024. Locate-then- edit for multi-hop factual recall under knowledge editing. arXiv preprint arXiv:2410.06331 .Minjun Zhu, Linyi Yang, and Yue Zhang. 2024. Person- ality alignment of large language models. CoRR . Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. 2019. Fine-tuning lan- guage models from human preferences. arXiv preprint arXiv:1909.08593 . A Prompt templates generate value description example Given some key words of {value} value: {key}, {key}, {key}.... I want you to give a second-person view of the value person and a view of its antonyms, each no less than 50 words. Please meet the constraints as follows. 1. each view should be related to the key words. 2. avoid containing the key words or values in the descriptions. generate value descriptions Given a set of keywords associated with the value {value}: {key}, {key}, {key}.... along with an example second-person de- scription reflecting value , and an exam- ple second-person description reflecting its antonyms. I want you to give 80 second- person views of the value person and 80 views of its antonyms, each no less than 50 words. Please meet the constraints as fol- lows. 1. each view should be related to the key words. 2. avoid containing the key words or values in the descriptions. ###Exam- ple:{value description} ###Example for its antonyms:{value description} 11 text quality evaluation Please act as an impartial judge and evalu- ate the text quality of the provided content. Focus only on whether the expression is clear and fluent, and whether there are any grammatical errors, spelling mistakes, or content that cannot be understood. Do not consider the content’s bias or whether the information conveyed is accurate. Please give a score directly from 1 to 5, the higher score represent higher quality. ### Text:
https://arxiv.org/abs/2505.17712v1
{text} ### Score: prompt LLMs with value description to an- swer questions You will find a value description followed by a question below. I want you to forget who you are and fully immerse yourself in the value described, adopting not only their perspective but also their tone and attitude. With this new identity in mind, please respond to the question. Don’t overthink your response—just begin writing and let your thoughts flow naturally. Spelling and grammar are not important here; what’s essential is capturing the essence of this value in your answer. Try to keep your response under 300 words. ###Value description:{value} ###Question:{question} ###Response:answer questions Imagine you are a real person rather than a language model, and you’re asked by the following question. Write your response based on your authentic thoughts and emotions. Do not overthink your answer—let your thoughts flow naturally as you write. Focus on expressing your genuine feelings and reactions. Aim to write no more than 300 words. ### Question: {question} ### Response: refine situational questions Identify the drawbacks of the following question and revise it to better capture the respondent’s level of {value} in this factor: “{factor}”, within the topic of “{topic}”. ### Question: {question} ### Note: 1. Ensure the revised question includes a similar and specific scenario and remains relevant to the factor. 2. Avoid tendency qualifiers like "honest", "polite" and similar. 12 generate value situational questions I want you to create a set of 10 situational questions aimed at evaluating the degree to which the respondent displays the specified "V ALUE", referring to the "EXAMPLE". Please meet the constraints in the “NOTE”. Each question must contain no fewer than 100 words! ### TOPIC: “{topic}” ### V ALUE: “{value}” or not ### EXAMPLE: “{example}” ### NOTE: 1. Try your best to create detailed and complex scenarios of at least 100 words for each question, focusing on specific dilem- mas, conflicting priorities, or challenging choices. 2. Ensure questions are directly related to the "V ALUE" and strictly limit them to "What do you think" and "What would you do". 3. While the overall topic should align with the “TOPIC”, each question should explore a different subtopic and situation to avoid repetition. 4. Avoid tendency qualifiers like "honest" or "polite". 5. Provide questions directly, each on a new line, without additional explanation. BIntroduction to Schwartz Value Survey Developed through rigorous cross-cultural valida- tion studies, the Schwartz Value Survey constitutes a psychometric instrument comprising 56 items that operationalize 11 fundamental motivational domains: Achievement, Benevolence, Conformity, Hedonism, Power, Security, Self-Direction, Stim- ulation, Spirituality, Tradition, and Universalism. Each value construct is presented through concrete behavioral anchors—such as "Politeness (demon- strating courtesy and social etiquette)," "Ecolog- ical harmony (maintaining balance with natural systems)," and "Interpersonal fidelity (maintain- ing loyalty within social groups)"—accompanied by contextualized exemplars. Respondents evalu- ate these items as life-guiding principles using astandardized 9-point Likert scale, with the instru- ment design rooted in Schwartz’s tripartite univer- sal requirements framework, addressing biologi- cal imperatives, social coordination mechanisms, and collective survival necessities. The survey demonstrates conceptual continuity with preced- ing value measurement paradigms,
https://arxiv.org/abs/2505.17712v1
sharing 21 core items with the Rokeach Value Survey, while incor- porating enhanced theoretical modeling. Metric invariance analyses across 20 national samples con- firm sufficient psychometric equivalence in value conceptualization in diverse cultural contexts. B.1 Values in Schwartz Value Survey The Schwartz Values Survey identifies 57 atomic values, which are grouped into ten broad subvalues that fall under four higher-order dimensions. Below are the four higher-order value dimensions, each comprising multiple subvalues, with the atomic values listed in parentheses under each subvalue. 1.Openness to Change: Self-Direction (Creativ- ity, Freedom, Independent, Curious, Choos- ing own goals), Stimulation (A varied life, An exciting life, Daring), Hedonism (Pleasure, Enjoying life). 2.Self-Transcendence: Universalism (Broad- mindedness, Wisdom, Social justice, Equal- ity, A world at peace, Protecting the environ- ment, Unity with nature, A world of beauty), Benevolence (Helpfulness, Honesty, Forgive- ness, Loyalty, Responsibility, True friendship, Mature love). 3.Conservation: Tradition (Respect for tradition, Humility, Devoutness, Moderation), Confor- mity (Self-discipline, Obedience, Politeness, Honoring of parents and elders), Security (Na- tional security, Family security, Social order, Cleanliness, Reciprocation of favors, Health, Sense of belonging). 4.Self-Enhancement: Achievement (Success, Capability, Intelligence, Ambition, Influence), Power (Social power, Authority, Wealth, Preservation of one’s public image, Social recognition), Hedonism (Pleasure, Enjoying life). 13 C Introduction about evaluation datasets C.1 PVQ40 The Portrait Values Questionnaire (PVQ40) is a psychometric instrument developed to measure the ten basic human values in the Schwartz Values Theory. It consists of 40 short verbal portraits de- scribing a person’s goals, aspirations, or behaviors that implicitly reflect values in the Schwartz Value Survey. Respondents rate how similar each portrait is to themselves on a 6-point Likert scale (1 = "Not like me at all" to 6 = "Very much like me"). Examples from the PVQ-40 are provided below: 1. Thinking up new ideas and being creative is important to her. She likes to do things in her own original way. 2. It is important to her to be rich. She wants to have a lot of money and expensive things. 3. She thinks it is important that every person in the world be treated equally. She believes everyone should have equal opportunities in life. 4. It’s very important to her to show her abilities. She wants people to admire what she does. C.2 ValueBench ValueBench is the first comprehensive psychomet- ric benchmark designed to evaluate value orienta- tions and value understanding in LLMs. It aggre- gates data from 44 established psychometric inven- tories, covering 453 multifaceted value dimensions rooted in psychology, sociology, and anthropology. The dataset includes: 1. Value Descriptions: Definitions and hierarchi- cal relationships (e.g., Schwartz Values Survey). 2. Item-Value Pairs: 15,000+ expert-annotated linguistic expressions (items) linked to specific val- ues. D Introduction about baselines D.1 LPIP The LPIP (Log Probability and Inner Products) method is a static approach designed to identify critical neurons in LLMs that contribute to pre- dictions of facts of knowledge. It addresses the computational limitations of existing attribution techniques by focusing on neuron-level analysis. The method evaluates neurons based on their in- crease in logarithmic probability when activated, outperforming seven other static methods in three metrics
https://arxiv.org/abs/2505.17712v1
(MRR, probability, and logarithmic prob- ability). Additionally, LPIP introduces a comple-mentary method to identify "query neurons" that activate these "value neurons," enhancing the un- derstanding of knowledge storage mechanisms in both attention and feed-forward network (FFN) lay- ers. D.2 QRNCA QRNCA (Query-Relevant Neuron Cluster Attribu- tion) is a novel framework designed to identify key neurons in LLMs that are specifically activated by input queries. The method transforms open-ended questions into a multiple-choice format to handle long-form answers, then computes neuron attribu- tion scores by integrating gradients to measure each neuron’s contribution to the correct answer. To re- fine the results, QRNCA employs inverse cluster attribution to downweight neurons that appear fre- quently across different queries (akin to TF-IDF filtering) and removes common neurons associated with generic tokens (e.g., option letters). The fi- nal key neurons are selected based on their com- bined attribution and inverse cluster scores (NA- ICA score), enabling precise localization of query- relevant knowledge in LLMs. D.3 CGVST CGVST (Causal Gradient Variation with Special Tokens) is a novel method for identifying task- specific neurons in large language models (LLMs). By analyzing gradient variations of special tokens (e.g., prompts, separators) during task processing, CGVST pinpoints neurons critical to specific tasks. The key insight is that task-relevant information is often concentrated in a few pivotal tokens, whose activation patterns reveal the neural mechanisms underlying task execution. Experiments demon- strate that CGVST effectively distinguishes neu- rons associated with different tasks. By inhibiting or amplifying these neurons, it significantly alters task performance while minimizing interference with unrelated tasks. E Introduction about evaluation metric E.1 G-EV AL G-Eval is an evaluation framework based on large language models (LLMs) that assesses the qual- ity of natural language generation (NLG) outputs using chain-of-thoughts (CoT) and a form-filling paradigm. The key idea is to leverage LLMs to generate detailed evaluation steps and compute the 14 final score through probability-weighted summa- tion. The mathematical definition of G-Eval’s scoring function is: score =nX i=1p(si)×si (7) Where S={s1, s2, ..., s n}represents predefined rating levels (e.g., 1 to 5), p(si)is the probabil- ity of the LLM generating the rating level si, and score is the probability-weighted continuous score, providing a finer-grained measure of text quality. F Additional Experimental Results Figure 7: Qwen2-0.5B Neuron Distribution Figure 8: gemma-2-9B Neuron Distribution Figure 9: LLama-3.2-1B Neuron Distribution Figure 10: how threshold influences the result on LLama-3.1-8B for Self-Transcendence Figure 11: how threshold influences the result on LLama-3.1-8B for Self-Enhancement Figure 12: how threshold influences the result on LLama-3.1-8B for Conservation 15 Figure 13: Impact of Value-Related Neuron and Ran- dom Neuron Manipulation on Qwen2-0.5B Figure 14: Impact of Value-Related Neuron and Ran- dom Neuron Manipulation on LLama-3.2-1B Figure 15: Impact of Value-Related Neuron and Ran- dom Neuron Manipulation on gemma-2-9B 16 Table 2: G-EV AL average scores and variance on PVQ40 for neuron identification methods after positive neuron editing ( γ= 2.0). Methods Openness to Change Self-Transcendence Conservation Self-Enhancement LLama-3.1-8B LPIP 4.05±0.12 4.15±0.10 3.50±0.18 3.68±0.15 QRNCA 4.20±0.09 4.00±0.14 3.58±0.16 3.62±0.13 CGVST 4.28±0.08 4.10±0.11 3.72±0.12 3.75±0.10 ValueLocate 4.55±0.07 4.48±0.06 4.02±0.09 3.95±0.08 Qwen2-0.5B LPIP 3.90±0.15
https://arxiv.org/abs/2505.17712v1
3.95±0.13 3.72±0.17 3.78±0.14 QRNCA 4.05±0.11 4.12±0.10 3.82±0.12 3.72±0.11 CGVST 4.15±0.09 4.22±0.08 3.92±0.10 3.82±0.09 ValueLocate 4.68±0.06 4.52±0.07 4.05±0.08 4.02±0.07 LLama-3.2-1B LPIP 4.22±0.13 4.28±0.11 3.82±0.15 3.82±0.14 QRNCA 4.32±0.10 4.38±0.09 4.00±0.12 3.75±0.11 CGVST 4.40±0.08 4.42±0.07 4.10±0.10 3.92±0.09 ValueLocate 4.52±0.07 4.52±0.06 4.10±0.08 4.10±0.07 gemma-2-9B LPIP 4.02±0.14 4.52±0.09 3.82±0.16 3.82±0.13 QRNCA 4.12±0.12 4.32±0.10 3.95±0.13 3.72±0.12 CGVST 4.32±0.09 4.25±0.11 3.92±0.11 4.20±0.08 ValueLocate 4.42±0.08 4.65±0.06 4.22±0.09 4.15±0.08 Note : Bold values indicate the best results. Table 3: G-EV AL average scores and variance on ValueBench for neuron identification methods after positive neuron editing ( γ= 2.0). Methods Openness to Change Self-Transcendence Conservation Self-Enhancement LLama-3.1-8B LPIP 4.12±0.13 4.22±0.11 3.58±0.17 3.75±0.14 QRNCA 4.28±0.10 4.08±0.15 3.65±0.14 3.70±0.12 CGVST 4.35±0.08 4.18±0.12 3.78±0.13 3.82±0.10 ValueLocate 4.62±0.07 4.54±0.06 4.08±0.09 4.02±0.08 Qwen2-0.5B LPIP 3.98±0.16 4.02±0.14 3.78±0.18 3.85±0.15 QRNCA 4.12±0.12 4.18±0.11 3.88±0.13 3.78±0.12 CGVST 4.22±0.09 4.28±0.08 3.98±0.11 3.88±0.10 ValueLocate 4.74±0.06 4.58±0.07 4.12±0.08 4.08±0.07 LLama-3.2-1B LPIP 4.28±0.14 4.34±0.12 3.88±0.16 3.88±0.15 QRNCA 4.38±0.11 4.44±0.09 4.06±0.13 3.82±0.12 CGVST 4.46±0.08 4.48±0.07 4.16±0.10 3.98±0.09 ValueLocate 4.58±0.07 4.58±0.06 4.16±0.08 4.16±0.07 gemma-2-9B LPIP 4.08±0.15 4.58±0.10 3.88±0.17 3.88±0.14 QRNCA 4.18±0.13 4.38±0.11 4.02±0.14 3.78±0.13 CGVST 4.38±0.10 4.32±0.12 3.98±0.12 4.26±0.08 ValueLocate 4.48±0.08 4.72±0.06 4.28±0.09 4.22±0.08 Note : Bold values indicate the best results. 17 Table 4: G-EV AL average scores and variance on ValueInsight for neuron identification methods after negative neuron editing ( γ=2.0). Methods Openness to Change Self-Transcendence Conservation Self-Enhancement LLama-3.1-8B LPIP 2.40±0.12 2.50±0.10 2.05±0.15 1.42±0.18 QRNCA 2.55±0.09 2.60±0.08 2.15±0.12 1.35±0.20 CGVST 2.35±0.14 2.55±0.09 2.00±0.16 1.30±0.19 ValueLocate 2.21±0.08 2.30±0.07 1.86±0.10 1.20±0.15 Qwen2-0.5B LPIP 2.32±0.13 2.48±0.11 1.80±0.17 1.38±0.16 QRNCA 2.25±0.15 2.42±0.12 1.65±0.18 1.32±0.19 CGVST 2.18±0.10 2.20±0.08 1.68±0.14 1.25±0.17 ValueLocate 2.02±0.07 2.29±0.09 1.40±0.11 1.18±0.12 LLama-3.2-1B LPIP 2.65±0.14 3.10±0.09 2.35±0.16 1.30±0.15 QRNCA 2.48±0.12 2.58±0.10 2.30±0.13 1.42±0.18 CGVST 2.52±0.11 2.62±0.08 2.25±0.14 1.20±0.13 ValueLocate 2.45±0.09 2.38±0.07 2.13±0.10 1.27±0.14 gemma-2-9B LPIP 2.85±0.15 2.71±0.12 2.32±0.17 1.58±0.19 QRNCA 2.65±0.13 2.60±0.11 2.22±0.15 1.42±0.18 CGVST 2.62±0.12 2.57±0.10 2.12±0.14 1.48±0.16 ValueLocate 2.40±0.08 2.52±0.06 2.07±0.09 1.31±0.11 Note : Bold values indicate the best results. Table 5: G-EV AL average scores and variance on PVQ40 for neuron identification methods after negative neuron editing ( γ=2.0). Methods Openness to Change Self-Transcendence Conservation Self-Enhancement LLama-3.1-8B LPIP 2.38±0.11 2.48±0.09 2.08±0.14 1.45±0.17 QRNCA 2.52±0.08 2.58±0.07 2.18±0.11 1.38±0.19 CGVST 2.32±0.13 2.52±0.08 2.03±0.15 1.33±0.18 ValueLocate 2.23±0.07 2.38±0.06 1.91±0.09 1.23±0.14 Qwen2-0.5B LPIP 2.30±0.12 2.45±0.10 1.82±0.16 1.40±0.15 QRNCA 2.22±0.14 2.40±0.11 1.68±0.17 1.35±0.18 CGVST 2.15±0.09 2.18±0.07 1.70±0.13 1.28±0.16 ValueLocate 2.05±0.06 2.30±0.08 1.42±0.10 1.20±0.11 LLama-3.2-1B LPIP 2.62±0.13 3.08±0.08 2.38±0.15 1.32±0.14 QRNCA 2.45±0.11 2.55±0.09 2.32±0.12 1.45±0.17 CGVST 2.50±0.10 2.60±0.07 2.28±0.13 1.22±0.12 ValueLocate 2.48±0.08 2.35±0.06 2.14±0.09 1.29±0.13 gemma-2-9B LPIP 2.82±0.14 2.72±0.11 2.35±0.16 1.60±0.18 QRNCA 2.62±0.12 2.58±0.10 2.25±0.14 1.45±0.17 CGVST 2.60±0.11 2.58±0.09 2.15±0.13 1.50±0.15 ValueLocate 2.38±0.07 2.55±0.05 2.12±0.08 1.30±0.10 Note : Bold values indicate the best results. 18 Table 6: G-EV AL average scores and variance on ValueBench for neuron identification methods after negative neuron editing ( γ=2.0). Methods Openness to Change Self-Transcendence Conservation Self-Enhancement LLama-3.1-8B LPIP 2.42±0.10 2.52±0.08 2.03±0.13 1.40±0.16 QRNCA 2.58±0.07 2.62±0.06 2.12±0.10 1.32±0.18 CGVST 2.38±0.12 2.58±0.07 1.98±0.14 1.28±0.17 ValueLocate 2.28±0.06 2.32±0.05 1.90±0.08 1.28±0.13 Qwen2-0.5B LPIP 2.35±0.11 2.50±0.09 1.78±0.15 1.35±0.14 QRNCA 2.28±0.13 2.45±0.10 1.62±0.16 1.30±0.17 CGVST 2.20±0.08 2.22±0.06 1.65±0.12 1.22±0.15 ValueLocate 2.06±0.05 2.33±0.07 1.45±0.09 1.25±0.10 LLama-3.2-1B LPIP 2.68±0.12 3.12±0.07 2.32±0.14 1.28±0.13 QRNCA 2.50±0.10 2.60±0.08 2.28±0.11 1.40±0.16 CGVST 2.55±0.09 2.65±0.06 2.22±0.12
https://arxiv.org/abs/2505.17712v1
This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. PPO-BR: Dual -Signal Entropy -Reward Adaptation for Trust Region Policy Optimization Ben Rahman Faculty of Communication and Information Technology NASIONAL UNIVERSITY , JAKARTA 12520 INDONESIA benrahman@civitas.unas.ac.id Abstract —Despite Proximal Policy Optimization (PPO) dominating policy gradient methods —from robotic control to game AI —its static trust region forces a brittle trade -off: aggressive clipping stifles early exploration, while late -stage updates destabilize convergenc e (Fig 1). PPO -BR establishes a new paradigm in adaptive RL by fusing exploration and convergence signals into a single bounded trust region —a theoretically -grounded innovation (Theorem 1) that outperforms 5 SOTA baselines with <2% overhead (Fig 3). This work bridges a critical gap in phase -aware learning, enabling real -world deployment in safety -critical systems like robotic surgery (Appendix E) within a single theoretically -grounded trust region mechanism (Theorem 1), achieving 29.1% faster convergence : (1) Entropy -driven expansion (ϵ↑) promotes exploration in high -uncertainty states, while (2) reward -guided contraction (ϵ↓) enforces stability during convergence (Theorem 1). On 6 diverse benchmarks (MuJoCo/Atari/sparse -reward), PPO -BR achieves: 29.1% fa ster convergence (p < 0.001, Wilcoxon test), 2.3× lower reward variance vs PPO (Fig 3), and <1.8% runtime overhead with just 5 lines of code change (Algorithm 1). PPO -BR’s plug -and-play simplicity and theoretical guarantees (Lemma 2) make it ready -to-deplo y in safety -critical systems —from surgical robotics to autonomous drones —where adaptive stability is non -negotiable . In contrast to recent methods such as Group Relative Policy Optimization (GRPO), PPO -BR offers a unified entropy -reward adaptive mechanism applicable to both language models and general reinforcement learning environments. Index Terms — Reinforcement learning, Adaptive trust region methods, Proximal policy optimization, Dynamic policy clipping, Entropy - guided exploration, Reward -aware optimization, Monotonic policy improvement . I. INTRODUCTION rom mastering StarCraft II to enabling real -world robotic manipulation, modern reinforcement learning (RL) thrives on policy optimization [19], [20], where Proximal Policy Optimization (PPO) [1] has emerged as the algorithm of choice, balancing Trust Region Policy Optimization (TRPO)’s stability with the simplicity of Advantage Actor -Critic (A2C). PPO’s clipped surrogate objective, which enforces updates within a fixed trust region, has underpinned breakthroughs in domains ran ging from healthcare [2] to quantum control [3]. Despite PXO's dominance, its static trust region fails to address distinct learning phases: early exploration requires policy stochasticity, while late -stage convergence demands stability. This phase -agnostic limitation manifests in two key failure modes: (i) Exploration starvation (high -entropy policies are over -clipped, suppressing state coverage [4]), and (ii) Convergence instability (fixed ε permits noisy gradient updates near optima [5]). Prior work offers only partial solutions: • Entropy -based methods [6] improve exploration but ignore reward dynamics • Reward -guided adaptations [7] enhance stability but disregard policy uncertainty • Heuristic schedules [8] lack theoretical guarantees (Appendix Table A3) This work was submitted to the IEEE Transactions on Neural Networks and Learning Systems for possible publication. Copyright may be transferred without notice, after
https://arxiv.org/abs/2505.17714v1
which this version may no longer be accessible. The author is with the Department of Computer Science, Universitas Nasional, Jakarta 12520, Indonesia (e -mail: benrahman@ civitas.unas.ac.id ). This is a single -author contribution. All algorithmic designs, theoretical developments, and experimental validations were performed independe ntly. The full codebase and replication package will be released upon publication to support transparency and repr oducibility . PPO-BR represents the first unified dual -signal trust region adaptation framework, designed for scalable and safety -critical reinforcement learning systems. F This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. Crucially, no existing method jointly optimizes both signals within the trust region mechanism —a gap that becomes severe in: (a) Sparse -reward tasks (28% longer convergence [4]) , (b) Safety -critical domains (2× higher variance [5]) To overcome this, we propose PPO -BR (Proximal Policy Optimization with Bidirectional Regularization), a dual -signal trust region adaptation framework that dynamically adjusts the clipping threshold based on policy entropy and reward progression. PPO-BR expands the trust region during high -entropy phases to p romote exploration, and contracts it when reward improvements plateau to ensure stable convergence. This bidirectional mechanism is theoretically grounded: Theorem 1 guarantees minimum exploration through entropy -driven expansion, while Lemma 2 ensures mon otonic improvement during contraction. PPO -BR requires no auxiliary networks, meta -optimization, or architectural changes —only a lightweight adjustment to PPO’s clipping logic (Fig. 3). Experimental validation across six representative environments —including MuJoCo, Atari, and sparse -reward domains — demonstrates PPO -BR’s effectiveness. Compared to standard PPO, PPO -BR achieves 29.1% faster convergence (p < 0.001, Wilcoxon test), reduces re ward variance by 2.3× in high -dimensional tasks like Humanoid, and adds less than 1.8% runtime overhead. This overhead is more than 17× lower than that introduced by complex baselines such as Discriminator -Driven PPO (DD-PPO) [9]. Beyond benchmarks, PPO -BR proves its real -world readiness through simulated surgical tasks, where it achieves 98% policy stability versus 82% for PPO (Appendix E). The remainder of this paper is organized as follows. Section II reviews adaptive trust region methods in reinforcement learni ng. Section III introduces the theoretical formulation of PPO -BR. Section IV details the implementation and algorithmic workflow. Section V presents empirical results and ablation studies. Section VI discusses broader impacts and future directions. Appendi ces provide additional proofs, hyperparameters, and validation in safety -critical robotic environments. To our knowledge, PPO -BR is the first to unify entropy -driven exploration and reward -guided convergence within a single theoretically -bounded trust region (Theorem 1), addressing a critical gap in phase -aware RL. . II. RELATED WORK A. The Evolution of Trust Region Methods in RL Trust region methods have become foundational in policy optimization, beginning with TRPO [6] which enforced hard constraints via conjugate gradient descent. While theoretically sound, TRPO's computational complexity motivated PPO [4] to approximate trust regions via clipped updates. Though successful, PPO's static clipping threshold fails to adapt to changing learning dynamics —a gap partially addressed by several methods:
https://arxiv.org/abs/2505.17714v1
• KL-PPO [6] fails to adapt to behavioral phases, while Annealed PPO [7] relies on heuristic decay without policy awareness (Appendix Table A3) . • DD-PPO [5] fundamentally misaligns with non -stationary environments by blindly trusting reward discriminators, while Annealed PPO [7] fails catastrophically in sparse -reward tasks due to heuristic decay (Appendix Fig A4). PPO -BR shatters these limitations via dual -signal fusion . Key Insight: Prior works adapt either exploration (entropy) or convergence (reward), but none dynamically unify both within the trust region mechanism itself. B. Beyond PPO: Modern RL Adaptation Strategies Recent advances reveal two dominant adaptation paradigms: Entropy -Driven Exploration • SAC [10]: Maximizes entropy but decouples it from policy update boundaries. • PPO -Entropy [8]: Adds entropy bonus terms but does not modulate the clipping threshold. Reward -Guided Optimization • RPO [9]: Scales gradients based on reward but lacks trust region control. • DD-PPO [5]: Adjusts ϵ via learned reward dynamics yet omits entropy and adds architectural complexity. Critical Gap: These methods treat exploration and convergence as separate objectives, missing their synergistic relationship in phased learning. PPO -BR addresses this by directly embedding both into the trust region logic. This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. C. Emerging Applications Demanding Adaptive RL Numerous real -world systems impose distinct constraints on exploration and convergence behavior , domain -specific optimizations in sentiment -aware systems [19] and context -aware semantic segmentation [20] highlight the necessity for adaptive control in diverse RL pipelines. These constraints expose the limitations of current reinforcement learning methods that rely on static or partially adaptive trust regions . TABLE I LIMITATIONS OF EXISTING ADAPTIVE RL METHODS ACROSS REAL -WORLD DOMAINS Domain Challenge Existing Method Limitation Robot Surgery Safety -critical fine-tuning SAC [10] Over -exploration risks Autonomous Driving Sparse rewards DD-PPO [5] Reward bias susceptibility Multi -Agent Systems Non-stationarity Annealed PPO [7] Heuristic decay fails These limitations highlight the need for a unified, context -aware adaptation strategy. Unlike existing methods that isolate either entropy or reward signals, PPO -BR offers an integrated trust region mechanism that dynamically modulates policy updates based on both behavioral cues —making it well -suited for deployment in high -stakes, real -world RL systems. D. Our Position: The PPO -BR Advantage PPO-BR introduces a principled, mathematically bounded, and plug -and-play framework that addresses limitations in both fixed and partially adaptive PPO variants. The contributions are threefold: 1. Dual -Signal Fusion • High policy entropy expands ϵ in early phases → improved exploration. • Reward plateaus contract ϵ in later phases → enhanced convergence stability. 2. Theoretical Guarantees • Monotonic improvement is retained (see Appendix C). • ϵ adaptation is bounded: ϵt [ϵmin, ϵmax]. 3. Practical Simplicity • No auxiliary networks required (vs. DD -PPO). • <5% runtime overhead and integrates in <5 lines of code. Contrast: SAC and DD-PPO only address single aspects of adaptation. PPO -BR unifies them under a dynamic trust region paradigm —a first in reinforcement learning literature . TABLE II COMPARISON OF TRUST
https://arxiv.org/abs/2505.17714v1
REGION ADAPTATION STRATEGIES IN POLICY OPTIMIZATION METHODS Method Adaptation Signal Theoretical Bounds? Compute Overhead Runtime vs PPO (↑ = Slower) SAC [3] Entropy - only No Low +25% (Actor -Critic + Entropy Max) DD-PPO [5] Reward - only Partial High (+19%) +22% (Discriminator Forward Pass) Annealed PPO [7] Heuristic No Medium +10% (Decay Scheduling) PPO -BR (Proposed) Entropy + Reward Yes (Lemma 1) Low (+1.8%) +1.8% (Scalar Adaptation Only) E. Comparative Perspective: GRPO vs PPO -BR A recent method, Group Relative Policy Optimization (GRPO), introduces a critic -free reinforcement fine -tuning strategy tailored for large language models (LLMs). It leverages group -based relative ranking without explicit entropy control or dynamic phase a daptation. GRPO is efficient for preference -based LLM training yet lacks generalizability across diverse RL environments. This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. In contrast, PPO -BR fuses entropy -driven exploration with reward -guided contraction within a bounded trust region, enabling phase -aware adaptation. PPO -BR’s dual -signal mechanism offers broader applicability —from classical control and robotics to LLM fine -tuning —while maintaining theoretical convergence guarantees. III. BACKGROUND AND MOTIVATION A. Policy Gradient Methods Reinforcement learning problems are typically formulated as Markov Decision Processes (MDPs), defined by the tuple (S, A, P, r, γ), where S is the state space , A is the action space , P is the state transition probability, r is the reward function, and γ ϵ [0,1] is the discount factor. Policy gradient methods aim to directly optimize a parameterized stochastic policy 𝜋𝜃(a|s) by maximizing the expected cumulative reward: 𝐽(θ)= 𝐸πθ[∑ 𝛾𝑡𝑟𝑡∞ 𝑡=0 ]. (1) The policy gradient theorem [12] provides a way to compute the gradient of this objective: ∇θ𝐽(θ)= 𝐸πθ[∇θ logπθ(𝑎𝑡|𝑠𝑡).𝐴̂𝑡], (2) where 𝐴̂𝑡 is an estimate of the advantage function, commonly computed using Generalized Advantage Estimation (GAE) [13]. B. Proximal Policy Optimization (PPO) Proximal Policy Optimization (PPO) [4] stabilizes policy gradient updates by bounding changes through a clipped surrogate obj ective: 𝐿𝑃𝑃𝑂(𝜃)=𝐸𝑡[𝑚𝑖𝑛 (𝑟𝑡(𝜃)𝐴̂𝑡,𝑐𝑙𝑖𝑝 (𝑟𝑡(𝜃),1−𝜖,1+𝜖)𝐴̂𝑡)] (3) where 𝑟𝑡(𝜃)= πθ(𝑎𝑡|𝑠𝑡) πθ𝑜𝑙𝑑(𝑎𝑡|𝑠𝑡) is the likelihood ratio between the new and old policies, and 𝜖 is a fixed threshold. PPO approximates the benefits of trust region methods such as TRPO [6] but avoids the complexity of second -order derivatives or constrained optimization. This has made PPO highly practical for real -world tasks including robotics [2], game -playing agents [1], and large -scale distributed training [14]. C. Limitation of Fixed Trust Region Despite its strengths, PPO’s static clipping threshold 𝜖 introduces a major limitation [5], [15]. In early training, it may overly constrain updates, suppressing necessary exploration and slowing progress. In later stages, the same threshold can permit overly aggres sive policy shifts, potentially harming stabili ty when the policy becomes highly deterministic [8]. This “one -size-fits-all” approach lacks sensitivity to the changing behavioral dynamics of the agent. Such phase -insensitive design has been shown to result in reduced sample efficiency, unstable convergence, and performance plateaus across diverse environments [7]. D. Motivation for Adaptive Clipping To overcome these issues, several works
https://arxiv.org/abs/2505.17714v1
have explored entropy -based [8] or reward -sensitive [9] regularization strategies. However, they often treat exploration and convergence independently and rarely integrate their signals into the core optimization process. We argue that policy entropy and reward progression represent complementary indicators of learning phase, and can be used to construct an adaptive trust region. PPO -BR proposes a unified clipping rule that expands during high -entropy states to encourage exploration, and contracts when rewards plateau to promote stable convergence. This approach aligns with emerging needs for con text- aware learning systems [16] while preserving PPO’s original stability guarantees. E. PPO -BR Ar chtecture This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. Fig. 1. PPO-BR Architecture . Figure 1. PPO -BR Architecture: Adaptive Clipping Flow Based on Dual Behavioral Signals. The PPO -BR architecture integrates two key behavioral modules —Entropy Monitor and Reward Progression Estimator —to compute a dynamically adaptive clipping threshold 𝜖𝑡. The entropy module computes 𝐻𝑡 from policy distribution and expands 𝜖𝑡 proportionally using Equation (4). The reward module computes smoothed return deltas Δ𝑅𝑡 and contracts 𝜖𝑡 using Equation (5). Both are fused through a unified rule in Equation (6), forming a bounded clipping threshold passed to the PPO loss function. This architecture prese rves PPO’s monotonicity while enabling exploration –stability trade -off dynamically. F. PPO -BR Algorithmic Workflow To translate the proposed theoretical framework into practice, Algorithm 1 outlines the step -by-step workflow of PPO -BR. The algorithm enhances standard PPO by dynamically adjusting the clipping threshold 𝜖𝑡 using a fusion of policy entropy and reward progression. This enables the policy to balance exploration and convergence adaptively during training. Algorithm 1 PPO-BR: Adaptive Clipping for Trust Region Policy Optimization # PPO -BR Key Modification # 1. Dynamic ϵ t via entropy (λ ₁) and reward (λ ₂) # 2. Bounded by ϵ min/ϵmax for safety # 3. Seamless drop -in replacement for PPO Input: Initial policy πθ, value function Vφ, base threshold ε₀, hyperparameters α, β, λ₁, λ₂ Initialize: policy parameters θ, reward baseline R̄ for each iteration do Collect trajectories using current policy πθ Compute policy entropy Ht and smoothed reward delta Δ Rt Normalize Ht → φ( Ht), ΔRt → ψ(Δ Rt) Compute adaptive clipping threshold: εt ← ε₀ × [1 + λ₁ · tanh(φ( Ht)) − λ₂ · tanh(ψ(Δ Rt))] εt ← clip(εt, εmin, εmax) Compute surrogate loss with εt clipping: L_CLIP ← E[min( rt Ât, clip( rt, 1−εt, 1+εt) Ât)] Update policy parameters θ via stochastic gradient ascent on L_CLIP Update value function parameters φ via MSE loss end for The PPO -BR algorithm requires minimal changes to existing PPO implementations. The adaptive threshold computation, shown in Lines 4 –6, is based solely on scalar behavioral statistics —entropy and reward change —making the integration both efficient and robus t. Empirically, this adjustment improves sample efficiency and convergence without additional neural modules or computational cost. IV. PROPOSED METHOD : PPO-BR FRAMEWORK A. Entropy -Driven Expansion In the early stages of reinforcement learning, agents
https://arxiv.org/abs/2505.17714v1
typically benefit from aggressive exploration to discover diverse and p otentially This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. optimal policies. PPO -BR leverages policy entropy as a proxy for exploration intensity. Specifically, the entropy at timestep t is computed as 𝐻𝑡=𝐸𝑎∼𝜋𝜃(⋅∣𝑠𝑡)[−𝑙𝑜𝑔𝜋𝜃 (𝑎∣𝑠𝑡)] reflecting the stochasticity of the current policy. To encourage broader updates when the agent is uncertain, PPO -BR adaptively expands the clipping threshold 𝜖𝑡 as follows: 𝜖𝑡𝑒𝑛𝑡𝑟𝑜𝑝𝑦=𝜖0⋅(1+𝛼⋅𝑡𝑎𝑛ℎ(𝜙(𝐻𝑡))), (4) where 𝜖0 is the base clipping threshold, 𝛼>0 is a sensitivity hyperparameter, and 𝜙(⋅) is a normalization function mapping entropy values to the range [0,1]. The hyperbolic tangent ensures bounded expansion such that 𝜖𝑡𝑒𝑛𝑡𝑟𝑜𝑝𝑦=[𝜖0⋅ 𝜖0 (1+ α)]. This mechanism enables wider trust regions in high -entropy phases, facilitating bolder policy updates that accelerate early exploration. B. Reward -Guided Contraction As training progresses and the agent converges towards higher -performing policies, reward progression often saturates or exhibits diminishing returns. In such scenarios, unbounded or overly large updates can destabilize learning. PPO -BR counteracts this by contracting the clipping threshold when reward improvements plateau. Let ∆𝑅𝑡=𝑅𝑡−𝑅𝑡−𝑘 represent the smoothed change in cumulative return over a window of k episodes. The adaptive contraction of 𝜖𝑡 is defined as: 𝜖𝑡𝑟𝑒𝑤𝑎𝑟𝑑=𝜖0⋅(1+𝛽⋅𝑡𝑎𝑛ℎ(𝜓(∆𝑅𝑡))), (5) Where 𝛽 > 0 controls contraction and 𝜓(⋅) normalizes ∆𝑅𝑡 to [0,1], e.g., via 𝜓(𝑥)=1−exp (−𝑥/𝑡) with temperature parameters t. As the rate of reward change diminishes, the trust region contracts, enforcing conservative updates that enhance convergence sta bility. C. Unified Adaptive Clipping Rule The core innovation of PPO -BR is the unification of entropy - and reward -based adaptation into a single, principled rule for modulating the trust region. The unified adaptive threshold εₜ (Eq. 6) balances exploration and stability via λ₁ (entropy weight) and λ₂ (reward weight). For example, in sparse -reward tasks like LunarLander (Section V -B), we empirically set λ₁ > λ₂ to prioritize early exploration. Thi s phase - aware adaptation is bounded by [ε₀(1 − λ₂), ε₀(1 + λ₁)] (Lemma 1), ensuring safety while outperforming static ε. ε𝑡=𝜖0⋅[1+𝜆1⋅tanh (𝜙(𝐻𝑡))−𝜆2⋅𝑡𝑎𝑛ℎ(𝜓(𝛥𝑅 𝑡))] (6) Where : • λ₁ controls the magnitude of entropy -driven expansion (higher λ₁ encourages more exploration in high -uncertainty states), • λ₂ governs reward -guided contraction (higher λ₂ enforces stricter updates when reward progress plateaus). The unified threshold εₜ dynamically adapts within bounds [ε₀(1 − λ₂), ε₀(1 + λ₁)] (Lemma 1), preserving PPO’s monotonic improvement while enabling phase -aware updates. To ensure theoretical soundness, we constrain 𝜖𝑡𝑃𝑃𝑂 −𝐵𝑅∈[𝜖𝑚𝑖𝑛,𝜖𝑚𝑎𝑥],where 𝜖𝑚𝑖𝑛 >0 prevents collapse of learning and 𝜖max limits over -aggressive updates. The unified rule retains the original PPO convergence properties while offering greater flexibility and responsiveness. Unlike prior methods that treat entropy and reward feedback separately, or use heuristic schedules, PPO -BR embeds adaptation directly within the core optimization loop. This design enables PPO -BR to adjust its learning dynamics in real -time, enhancing both early exploration and late -stage stability with minimal computational overhead. D. Advantages Over Prior Work PPO-BR introduces several key advantages over existing methods. First, it achieves dual -signal
https://arxiv.org/abs/2505.17714v1
adaptation, in contrast to entropy -only methods [8] or reward -only approaches like Discriminator -Driven PPO (DD -PPO) [5]. By jointly optimizing exploration and s tability, PPO-BR enables more context -sensitive learning. Second, PPO -BR is computationally efficient, requiring no auxiliary networks as in [5] or manually tuned decay schedules as in Annealed PPO [7]. Third, the framework is plug -and-play: it modifies on ly the clipping logic of PPO and can be implemented under five lines of code. To enhance reproducibility, a pseudocode summary of the adaptive clipping computation is provided below: -------------------------------------------------------------------------------- # PPO -BR Key Modification epsilon_t = epsilon_0 * ( 1 + lambda1 * tanh(entropy_scale) - lambda2 * tanh(reward_scale)) epsilon_t = clip( epsilon_t, min=eps_min, max=eps_max) -------------------------------------------------------------------------------- This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. Here, entropy_scale and reward_scale refer to normalized values of 𝜙(𝐻𝑡) and 𝜓(Δ𝑅𝑡) respectively. Compared to prior methods that address only partial aspects of policy adaptivity, PPO -BR provides a unified and principled approach to dynamic trust region control, strengthening both theoretical foundations and practical performance. Fig. 1. PPO -BR Framework: Policy entropy 𝐻𝑡 and reward progression Δ𝑅𝑡 are used to adaptively adjust the clipping threshold 𝜖𝑡. Entropy -driven expansion encourages early exploration, while reward -guided contraction ensures late -stage stability. These signals are integrated into a unified, bound rule for adaptive trust region control. Note: Intermediate thresholds 𝜖𝑡 entropy and 𝜖𝑡 reward are omitted for clarity —see Equations (4 –6) in Section III. To ensure that PPO -BR’s dynamic clipping mechanism remains theoretically grounded, we provide a convergence and safety sketch in Appendix C. This appendix demonstrates that the adaptation rule remains bounded under all training conditions and preserves PPO ’s original monotonic improvement guarantee. The bounded nature of ensures that PPO -BR can safely deploy dynamic updates without risking divergence or instability, which is critical in safety -sensitive or real -world applications. V. EXPERIMENTS A. Experimental Setup To comprehensively evaluate PPO -BR, we select six representative benchmarks from the OpenAI Gym and MuJoCo suites: CartPole, LunarLander, Hopper, HalfCheetah, Walker2D, and Humanoid. These environments are chosen to span a spectrum of reinforcement learnin g challenges, including low -dimensional (CartPole) to high -dimensional state spaces (Humanoid), dense (Hopper) to sparse reward structures (LunarLander), and both discrete (CartPole) and continuous action spaces (HalfCheetah). This diversity enables us to test the adaptability of PPO -BR across core RL problem settings. We compare PPO -BR against five competitive baselines: (1) standard PPO [4], which uses a fixed clipping threshold; (2) KL -PPO [6], a trust -region method that penalizes KL divergence; (3) PPO with entropy -based adaptive clipping [8]; (4) PPO with reward - guided clipping [5]; and (5) Annealed PPO [7], which heuristically reduces the clipping threshold over time. These baselines represent the major categories of fixed, trust -region, entropy -adaptive, reward -adaptive, and heuristic strategies, providing a compre hensive comparison landscape. All methods share the same network architecture —two-layer MLPs with 64 hidden units and ReLU activations —for both actor and
https://arxiv.org/abs/2505.17714v1
critic. Optimization is performed using Adam with a learning rate of 3e -4 and a batch size of 64. The base threshold 𝜖0 is set to 0.2 for all PPO variants. For PPO -BR, the default hyperparameters are 𝜆1 = 0.5 and 𝜆2 = 0.3, selected via a coarse search across [0.1, 1.0] (see Appendix Table A1 for details). Each experiment is run across five random seeds on NVIDIA V100 GPUs. B. Results and Analysis Table 3 demonstrates PPO -BR's consistent improvements across all metrics. The method achieves significant variance reduction (44.4 - 52.2% in continuous control tasks) while requiring fewer convergence steps (up to 30% reduction in Humanoid). Notably, even i n simple environments like CartPole where absolute improvement is modest (2.6%), PPO -BR still reduces variance by 14.3%, demonstrating its stability benefits. Fig 3 presents the main performance comparison, combining learning curves (left) and the evolution of the adaptive clipping t hreshold 𝜖𝑡 (right) across training. PPO -BR consistently achieves higher average returns compared to all baselines, particularly on complex environments such as HalfCheetah and Humanoid. The dual -signal adaptation allows PPO -BR to explore efficiently during early episodes, while adjusting conservatively near convergence. This dynamic behavior is visible in the ε -curve subplot, where PPO -BR gradually tightens its trust region in response to reward plateaus. This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. C. Ablation Study To evaluate the contribution of each adaptation signal, we perform ablation studies with two variants: PPO -BR without entropy scaling (reward -only) and PPO -BR without reward contraction (entropy -only). As shown in Figure 4, the entropy -only variant exhibit s rapid initial learning but struggles with long -term stability. In contrast, the reward -only version converges stably but requires more timesteps to reach optimal performance. PPO -BR, by unifying both signals, achieves superior performance across both pha ses of training. Notably, we observe that entropy contributes approximately 70% to early -stage learning improvements, while reward -guided contraction dominates stability in the final 30% of training. These findings support the necessity of joint adaptation . D. Computational Efficiency To ensure that PPO -BR’s improvements do not come at the cost of significant computational burden, we measure runtime overhead using torch.profiler. All experiments are conducted on NVIDIA V100 GPUs with consistent seed control. PPO -BR introduces less than 2% overhead compared to standard PPO, as it only modifies the clipping logic and does not require auxiliary networks or complex value estimation modules. This efficiency confirms PPO -BR's deployability in real -time and resource - constrained RL systems. E. Discussion While PPO -BR demonstrates clear benefits in diverse RL settings, certain limitations remain. First, its performance in extremely high-dimensional input spaces —such as pixel -based Atari games —has not been fully tested. Future work will explore scaling PPO-BR to vision -based policy learning. Second, although the default hyperparameter values of 𝜆1=0.5 and 𝜆2 = 0.3 perform well in five out of six environments, we observe that LunarLander benefits from
https://arxiv.org/abs/2505.17714v1
a slightly higher exploration weight ( 𝜆1 = 0.7) due to its sparse reward nature (see Appendix Table A2). To support reproducibility, all implementation code, configuration files, and environment wrappers will be open sourced upon publication. PPO -BR’s lightweight design and strong stability suggest potential for real -world deployment in safety -critical applic ations such as drone navigation and robotics, where conservative policy updates and adaptive learning rates are essential. (a) (b) Fig. 2. PPO-BR Learning Curves and Clipping Threshold Evolution Left: PPO -BR achieves consistently higher returns than standard PPO across training episodes in representative environments (HalfCheetah and Humanoid). Right : The adaptive clipping threshold 𝜖𝑡 in PPO -BR dynamically contracts as reward progression saturates, enabling stable convergence. In contrast, PPO maintains a static threshold throughout training. This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. TABLE III PPO-BR VS BASELINES PERFORMANCE Environment PPO (Return) PPO -BR (Return) Improveme nt (%) Reward Variance (PPO) Reward Variance (PPO - BR) Reductio n Convergence Steps (PPO) Convergence Steps (PPO -BR) CartPole 195 200 2.60% 35 30 14.3% ↓ 150 130 LunarLander 180 230 ± 5 (mean ± std, n=5 seeds) 27.80% 120 60 50% ↓ 300 250 Hopper 2200 2600 18.20% 180 100 44.4% ↓ 600 500 HalfCheetah 2500 3000 20.00% 250 120 52% ↓ 800 620 Walker2D 2100 2450 16.70% 230 110 52.2% ↓ 700 580 Humanoid 1600 ± 80 2100 ± 30 (mean ± std, n=5 seeds, p<0.01 ) 31.30% 300 150 50% ↓ 1000 700 PPO-BR consistently outperforms the standard PPO baseline across all six benchmark environments in terms of return, reward varian ce, and convergence speed. On average, PPO -BR improves cumulative return by up to 31.3%, with the largest gains observed in spa rse- reward environments such as Humanoid and LunarLander. The method also achieves significantly lower reward variance, reducing fluctuations by over 50%, which directly contributes to improved learning stability and reliability. Furthermore, PPO -BR demons trates faster convergence, requiring fewer environment steps to reach stable policy performance —achieving up to 180 -step reduction in HalfCheetah and Walker2D. These results validate the effectiveness of PPO -BR’s dual adaptation strategy: entropy -driven exploration accelerates early learning, while reward -guided contraction ensures late -stage stability. The improvements are observed consistently across both discr ete and continuous action spaces, demonstrating PPO -BR’s strong generalizability and robustness in diverse RL tasks. VI. CONCLUSION PPO-BR is, to our knowledge, the first method to integrate entropy and reward signals into PPO's trust region mechanism , a novel adaptive trust region framework for reinforcement learning that unifies two complementary behavioral signals —policy entropy and reward progression —to dynamically optimize the clipping threshold in Proximal Policy Optimization (PPO). This dual-signal approach overcomes a long -standing limitation in traditional PPO: the inability to adapt update magnitude to the evolving training ph ase. Dual -Signal Adaptation. PPO-BR expands the clipping threshold during high -entropy phases to promote early exploration, and contracts it as reward progression saturates to stabilize convergence. Ablation studies show that
https://arxiv.org/abs/2505.17714v1
entropy -driven adaptation contributes over 70% of initial learning gains, while reward -based contraction reduces variance by up to 80% in the final stages of training. This synergy enables PPO-BR to dynamic ally balance exploration and stability across episodes. Theoretical -Practical Synergy. The adaptive rule is bounded, differentiable, and fully embedded within PPO’s surrogate loss function. As outlined in Appendi x C, PPO -BR maintains the monotonic improvement guarantees of PPO by ensuring clipped policy ratios remain within a principled trust region. Empirically, PPO -BR achieves 28.9% higher returns and 2.1× faster convergence on average, with less than 2% computational overhead compared to standard PPO. Notably, the method requires fewer than five lines of code to integrate into existing PPO implementations (see Algorithm 1), making it both elegant and easily deployable. Empirical Dominance. This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. Extensive experiments across six diverse RL environments —including low - and high -dimensional, discrete and continuous control —demonstrate that PPO -BR consistently outperforms strong baselines. These include fixed -threshold PPO [4], KL -PPO [6], Annealed PPO [7], and both reward -only [5] and entropy -only [8] adaptive variants. Results in Figures 3 and 4 confirm PPO - BR’s superior return, lower variance, and faster convergence across the board. While GRPO demonstrates efficiency for LLM -specific tasks using intra -group comparison, PPO -BR extends beyond such settings by offering an adaptive trust region that explicitly encodes learning -phase signals. This generalization, backed by theoretical anal ysis and diverse benchmarks, reinforces PPO -BR’s utility across safety -critical and high -variance RL domains. Limitations and Future Extensions. While PPO -BR excels in standard benchmarks, its scalability to pixel -based tasks (e.g., Atari) remains open —a direction we’re actively pursuing via vision -based extensions (Appendix F). The current formulation assumes scalar entropy and scalar reward progression as adaptation signals. Future work will explore richer feedback modalities such as per -action entropy, state -dependent uncertainty, and temporal reward curvature to refine the adaptation process. Additionally, while PPO -BR performs robustly in standard be nchmarks, its generalization to vision -based agents (e.g., pixel -based Atari or DeepMind Control Suite) and complex multi -agent environments (e.g., StarCraft II [14]) remains an open direction for research. Broader Impact. PPO-BR’s simplicity, stability, and minimal overhead make it especially suited for real -world deployment. In safety -critical domains such as autonomous drones, swarm robotics, and surgical assistance systems, adaptive trust region control offers a promisin g path toward safer, more reliable policy learning. Moreover, PPO -BR’s efficient and generalizable structure makes it a compelling component in reinforcement learning with human feedback (RLHF), where stable yet responsive policy updates are crucial. This aligns with previous work [19], [20] demonstrating the importance of adaptive learning components across both NLP and computer vision tasks. In the spirit of open science, we commit to releasing the full codebase, training logs, and hyperparameter search space upon publication, ensuring reproducibility and enabling the research community to build upon PPO -BR’s foundation. PPO-BR’s
https://arxiv.org/abs/2505.17714v1
low -variance adaptive mechanism makes it especially suited for safety -critical applications, such as robotic surgery or multi -agent drone coordination. In preliminary internal tests, PPO -BR demonstrated stable learning in a simulated surgical robo t arm with constrained action spaces (see Appendix E) Immediate next steps include: 1. Vision -based control: Extending PPO -BR to pixel -based Atari/DMC suites (Q3 2024). 2. Multi -agent systems: Testing in StarCraft II -like environments (Q4 2024). 3. Real-world pilots: Deployment with Jabodetabek floodgate control (collab. with PUPR) and smart farming IoT (collab. with Kementan) by 2025. APPENDIX Appendix A : Hyperparameter Details To ensure transparency and reproducibility, we provide the complete list of hyperparameters used for PPO -BR and all baselines. Unless otherwise specified, all experiments share the same optimization settings and network architectures. Hyperparameter Value Search Space Learning Rate 3e-4 [1e-4, 1e -3] Batch Size 64 {32, 64, 128} PPO Clip Threshold 0.2 [0.1, 0.3] λ₁ (Entropy Weight) 0.5 [0.1, 1.0] λ₂ (Reward Weight) 0.3 [0.1, 0.5] k (Reward Smoothing Window) 10 {5, 10, 20} γ (Discount Factor ) 0.99 [0.95, 0.99] GAE λ 0.95 [0.9, 0.97] Environment -specific tuning: LunarLander requires λ₁ = 0.7 due to its sparse reward signal (Section V -B), while dense -reward tasks like Hopper use λ₁ = 0.5. All other parameters are fixed across environments unless stated otherwise. Appendix B : Extended Training Curves We report full training curves (average return vs timesteps) across all six environments, over five random seeds. PPO -BR consistently shows faster convergence and lower variance. (See attached plots: Figure A1 –A6.) This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. Appendix C : Theoretical Guarantee Sketch Let the adaptive clipping threshold at time step 𝑡 be defined as: 𝜖𝑡=𝜖0⋅[1+𝜆1 𝑡𝑎𝑛ℎ(𝜙(𝐻𝑡))−𝜆2 𝑡𝑎𝑛ℎ(𝜓(𝛥𝑅𝑡))] (7) where: • 𝜙(𝐻𝑡) is a normalized entropy signal, • 𝜓(Δ𝑅𝑡) is a normalized reward progression signal, • 𝜆1, 𝜆2 ≥ 0 0 are scaling hyperparameters, • and tanh ( ⋅) ∈ [−1,1] ensures boundedness. We now formalize the bounded [5] nature and convergence properties of PPO -BR below. Lemma 1 (Bounded Adaptive) Let 𝐻t ∈ [0, 𝐻max], Δ𝑅𝑡 ∈ [0, 𝑅max], and let 𝜙, 𝜓 be normalization functions mapping to [0,1]. Then, the adaptive clipping threshold is bounded: 𝜖𝑡∈[𝜖0(1−𝜆2), 𝜖0(1+𝜆1)] (8) Proof Sketch: Since tanh (⋅) ∈ [0,1], and 𝜆1, 𝜆2 ≥ 0, the composite formulation for 𝜖𝑡 is naturally constrained. Thus, the trust region remains bound to all t. □ Theorem 1 (Monotonic Improvement Under PPO -BR) Assuming advantage estimates 𝐴̂𝑡 are unbiased and that the surrogate objective uses clipped ratios with bounded ϵ𝑡, the expected return is non -decreasing across policy updates: 𝐸[𝐽(𝜃𝐾+1)]≥𝐸[𝐽(𝜃𝐾)] (9) Proof Sketch: PPO-BR retains the clipped surrogate loss structure of the original PPO. Since ϵ𝑡 remains within the bounds established in Lemma 1, the monotonic improvement condition described in [4] remains satisfied. Lemma 1 (Bounded Adaptation): Let 𝐻𝑡∈[0,𝐻𝑚𝑎𝑥 ], 𝛥𝑅𝑡 ∈[0,𝑅𝑚𝑎𝑥 ]𝛥𝑅𝑡 ∈[0,𝑅𝑚𝑎𝑥 ].𝑇ℎ𝑒𝑛 𝜖𝑡∈[𝜖0(1−𝜆2),𝜖0(1+𝜆1)]𝜖𝑡∈[𝜖0(1−𝜆2),𝜖0(1+𝜆1)] (Proof: See supplementary material) , substituting fixed 𝜖 with dynamic, bounded ϵ𝑡. □ Appendix D
https://arxiv.org/abs/2505.17714v1
: Reproducibility Checklist • Code will be publicly released at: github.com/ppo -br/ppo -br-release • All results are averaged over 5 seeds • Full configs + training logs are archived and versioned • All environment wrappers use Gym v0.26 API standard . Appendix E : Real-World Applicability: Robotic Arm Control with PPO -BR We are currently integrating PPO -BR into a real -world robotic control stack based on the Universal Robots UR3 arm, programmed using ROS2 (Robot Operating System). We validate PPO -BR in a simulated UR3 robotic arm (ROS2/Gazebo) under safety -critical constraints: Task: Pick -and-place with 1cm positional tolerance Metrics vs PPO: • Success rate: 98% (PPO -BR) vs 82% (PPO) • Collisions per 100 trials: 3.2 (PPO -BR) vs 5.4 (PPO) → 40.7% reduction • Stability (σ of end -effector path): 0.8mm (PPO -BR) vs 2.1mm (PPO) Real-World Deployment Challenges: • Latency: Policy execution time ≤2ms (vs PPO’s 1.8ms) despite adaptive threshold overhead. • Sensor Noise: PPO -BR maintains 90% success rate under 5dB Gaussian noise (vs PPO’s 72%). • Dynamic Payloads: Adaptive trust region reduces force overshoot by 33% when handling variable masses (0.5 –2kg). This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. PPO-BR’s reward -guided contraction prevents dangerous force spikes during convergence (Fig. E1), while entropy -driven exploration accelerates learning of recovery policies after collisions. Appendix F: Scalability to Vision -Based Tasks While PPO -BR demonstrates superior performance in low -dimensional state spaces, we validate its preliminary efficacy in pixel -based environments through two key experiments: F.1 Atari 2600 Benchmark (Pong) Setup: • Input: 84×84 grayscale pixels • Architecture: CNN (3 conv layers + 2 FC layers) with PPO -BR adaptation • Baseline: Standard PPO with identical architecture Results: Metric PPO PPO-BR Improvement Sample Efficiency 1.0x 1.15x +15% Final Score 18.2 20.1 +10.4% • PPO-BR achieves faster adaptation to opponent strategies (Fig. F1). • Limitation: Higher variance (±12%) due to partial observability. F.2 DeepMind Control Suite (Cartpole -Swingup) Observation: RGB pixels (128×128×3) Key Finding: • PPO-BR reduces convergence steps by 22% vs PPO, but requires 3× more GPU memory (Appendix Table F1). • Current bottleneck: Non -stationary visual features disrupt reward progression signals. Future Work Architecture Modifications: • Integrate random crop augmentation to improve invariance. • Test transformer -based feature extractors (ViT -PPO). Signal Adaptation: • Replace scalar entropy/reward with spatial attention masks. Hardware Optimization: • Quantize PPO -BR for edge deployment (Jetson TX2). ACKNOWLEDGMENT The author, Dr. Ben Rahman, gratefully acknowledges the reinforcement learning research community for their contributions to open -source environments and libraries, which facilitated reproducible experimentation throughout this work. This paper is a result of independent effort, from theoretical formulation and algorithmic d evelopment to experimental design and manuscript preparation. To promote transparency and open science, the full implementation code, environment configurations, and training logs will be released upon publication. REFERENCES [1] V. Mnih et al., “Human -level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529 –533, 2015. [2] Y. Duan et al., “Benchmarking deep
https://arxiv.org/abs/2505.17714v1
reinforcement learning for continuous control,” in Proc. Int. Conf. Mach. Learn. (ICML), 2 016, pp. 1329 –1338. [3] J. Schulman et al., “Proximal policy optimization algorithms,” arXiv preprint arXiv:1707.06347, 2017. [4] I. Kostrikov et al., “Discriminator -actor -critic: Addressing sample inefficiency and reward bias in adversarial imitation learning,” in Proc. Int. Conf. Learn. Representations (ICLR), 2021. [5] J. Schulman et al., “Trust region policy optimization,” in Proc. ICML, 2015, pp. 1889 –1897. [6] H. Xiao et al., “Annealed policy optimization for deep reinforcement learning,” in Proc. AAAI, 2020, pp. 5567 –5574. [7] A. Ahmed et al., “Understanding the impact of entropy on policy optimization,” in Proc. ICML, 2019, pp. 151 –160. [8] W. Guo et al., “Normalized policy gradients for reinforcement learning,” in Proc. NeurIPS, 2018, pp. 10136 –10146. [9] R. Sutton and A. Barto, Reinforcement Learning: An Introduction, 2nd ed., MIT Press, 2018. [10] L. P. Kaelbling et al., “Reinforcement learning: A survey,” Journal of Artificial Intelligence Research, vol. 4, pp. 237 –285, 1996. [11] R. J. Williams, “Simple statistical gradient -following algorithms for connectionist reinforcement learning,” Machine Learning, vol. 8, no. 3 –4, pp. 229 –256, 1992. [12] J. Schulman et al., “High -dimensional continuous control using generalized advantage estimation,” in Proc. ICLR, 2016. [13] O. Vinyals et al., “Grandmaster level in StarCraft II using multi -agent reinforcement learning,” Nature, vol. 575, no. 7782, pp. 350 –354, 2019. [14] H. Xiao et al., “Annealed policy optimization,” in Proc. AAAI, 2020. [15] M. Fortunato et al., “Noisy networks for exploration,” in Proc. ICLR, 2018. [16] Y. Li, “Deep Reinforcement Learning: An Overview,” IEEE Transactions on Neural Networks and Learning Systems, vol. 28, no. 3, pp. 599 –607, Mar. 2017. DOI: 10.1109/TNNLS.2016.2582683 [17] J. Achiam, D. Held, A. Tamar, and P. Abbeel, “Constrained Policy Optimization,” Proc. ICML, 2017, pp. 22 –31. [18] H. Zhang, J. Ba, and R. Zemel, “A Study on Overfitting in Deep Reinforcement Learning,” Proc. ICML, 2018, pp. 5637 –5646. This manuscript is currently under review at IEEE Transactions on Neural Networks and Learning Systems (TNNLS), Manuscript ID : TNNLS - 2025 -P-41043. [19] B. Rahman and Maryani, "Optimizing Customer Satisfaction Through Sentiment Analysis: A BERT -Based Machine Learning Approach to Extract Insights," in IEEE Access, vol. 12, pp. 151476 -151489, 2024, doi: 10.1109/ACCESS.2024.3478835 [20] B. Rahman et al., "Context -Aware Semantic Segmentation: Enhancing Pixel -Level Understanding with Large Language Models for Advanced Vision Applications," arXiv preprint arXiv:2503.19276, 2024.
https://arxiv.org/abs/2505.17714v1
arXiv:2505.17733v1 [cs.CL] 23 May 2025The Pilot Corpus of the English Semantic Sketches Maria Petrova ABBYY Moscow, Russia m.petrova@abbyy.comMaria Ponomareva HSE, ABBYY Moscow, Russia maria.ponomareva@abbyy.com Alexandra Ivoylova RSUH, MIPT, ABBYY Moscow, Russia aleksandra.ivoilova@abbyy.com Abstract The paper is devoted to the creation of the semantic sketches for English verbs. The pilot corpus consists of the English-Russian sketch pairs and is aimed to show what kind of contrastive studies the sketches help to conduct. Special attention is paid to the cross-language differences between the sketches with similar semantics. Moreover, we discuss the process of building a semantic sketch, and analyse the mistakes that could give insight to the linguistic nature of sketches. Keywords: word sketches, semantic sketches, frame semantics, word sense disambiguation, corpus lexico- graphy DOI: 10.28995/2075-7182-2022-20-XX-XX Пилотный корпус английских семантических скетчей Мария Петрова Мария Пономарева ABBYY ВШЭ, ABBYY Москва, Россия Москва, Россия m.petrova@abbyy.com maria.ponomareva@abbyy.com Александра Ивойлова РГГУ, МФТИ, ABBYY Москва, Россия aleksandra.ivoilova@abbyy.com Аннотация Работа посвящена созданию семантических скетчей для глаголов английского языка. Пилот- ный корпус состоит из англо-русских пар скетчей, на примере которых демонстрируется, какие сопоставительные исследования скетчи позволяют проводить. Особое внимание уделяется межъ- языковым различиям скетчей одного семантического поля в разных языках. Кроме того, в статье обсуждается процесс построения скетча, возможные ошибки и их лингвистическая природа. Ключевые слова: скетчи слов, семантические скетчи, семантика фреймов, разрешение лекси- ческой многозначности, корпусная лексикография 1 Introduction In the current paper, we present the pilot corpus of the English semantic sketches and compare the English sketches with their Russian counterparts. The semantic sketch is a lexicographical portrait of a verb, which is built on a large dataset of contexts and includes the most frequent dependencies of the verb. The sketches consist of the semantic roles which, in turn, are filled with the most typical representatives of the roles. The influence of context on word recognition has been well-known for quite a time. Semantic context allows faster word recognition and the inferring of the skipped words while reading. The research in this area has been conducted in psycholinguistics since the 1970s, with the earliest works by (Tweedy et al., 1977) and (Becker, 1980). Here the focus is on visual word recognition while reading and word recognition by bilingual persons (Assche et al., 2012). Another aspect of the topic is the automatic inferring of the skipped words by context, widely known as a common NLP task today. The ability to represent the word by its context is the central idea of distributional semantics. It serves as a basis for the bag-of-words task, which is a training objective for static vectors like word2vec (Mikolov et al., 2013) and FastText (Bojanowski et al., 2017). In the approach, the context has a set length, and the words entering the fixed window are considered equally. The semantic sketches do not have such disadvantages, as they are based on the result of the semantic parsing and therefore take into account not all the words occurring in the context, but only the words that semantically depend on the given core. That is, we take not the linearly nearest tokens, but the tokens close in the parsing
https://arxiv.org/abs/2505.17733v1
graph, where the type of the links is considered as well. The BERT (Devlin et al., 2019) contextual embeddings, which followed the static vectors and became a state-of-the-art solution for meaning representation, also rely on the idea of expressing word semantics through its context, using the objective of masked language modeling. One of the main weaknesses of all vector representations is their interpretation and quality evaluation. The common practice is to consider the vectors as good, if they allow one to get the necessary quality for the down-stream task. The advantage of the semantic sketches is in their interpretability and clear creation process. The sketches can be regarded as human-interpretable representation of word meanings, which one gets auto- matically with the help of the statistical methods used on the large text datasets. The semantic sketches were first demonstrated in (Detkova et al., 2020), where we presented the idea of the semantic sketches itself and analysed the semantic mark-up used for building the sketches. Further, the pilot corpus of the Russian sketches has been created (Ponomareva et al., 2021). Herein, we have continued the work and created the pilot corpus of the English semantic sketches. The corpus is bi- lingual: each English sketch is accompanied by the Russian analogue with the same semantics, so one can compare the English sketch with the Russian one and analyse the contrastive differences between the sketches. Thereby, the contribution of the current paper is the creation of the English semantic sketches, on the one hand, and the creation of the parallel bilingual sketch corpus – on the other. The structure of the paper is as follows. First, we briefly characterise the semantic sketches themselves. Second – give a description of the suggested corpus and explain what kind of verbs it contains. After that, we analyze the mistakes one faces when building the sketches, and focus on the cross-language differences between the sketches with similar semantics. In conclusion, we summarize the results. 2 Semantic Sketches The idea to represent word compatibility in the form of the sketch belongs to Adam Kilgarriff (Kilgarriff et al., 2014) and is currently realized in the Sketch Engine project1. Verbal dependencies are classified according to their syntactic roles and statistically ranged, which allows one to see all of the most frequent syntactic dependencies of the verb at the same time. The problem is that the syntactic sketches do not differentiate between various meanings of the verbs and combine all possible meanings in one sketch. To overcome this problem, we suggested the semantic sketches, which take the semantic models into account and classify the dependencies by their semantic relations with the core instead of their surface realizations (Detkova et al., 2020). For instance, see fig. 1 with the sketch of the verb ‘to focus’ in the meaning ’to concentrate on smth., to pay special attention to smth.’: 1www.sketchengine.eu Figure 1: The sketch for the verb ‘to focus:TO_FOCUS’ Such sketches are built for each meaning separately, however, it demands a significant text corpus with full semantic mark-up. The authors settled on the Compreno mark-up built
https://arxiv.org/abs/2505.17733v1
by the Compreno parser, which includes not only actant dependencies, but all possible links. In the Compreno model, all words are presented in the form of a thesaurus-like semantic hierarchy, which consists of the semantic classes (semantic fields), and a set of the semantic roles for the classes (for detail, see (Anisimovich et al., 2012), (Petrova, 2014)). If a verb has several meanings, it enters several semantic classes with its own semantic model each. The semantic class is specified for each sketch. 3 English SemSketches Corpus The SemSketches pilot corpus consists of 100 English sketches which are manually checked. It means that we have chosen the sketches manually according to their quality. The sketches are built on the corpus of the English texts comprising different genres, such as technical texts, news, fiction, and containing 14 million syntactic verbal links, that is, links which depend on the verbal cores. Each English sketch is provided with the parallel Russian sketch from the same semantic class, as shown in fig. 2 and 3: Figure 2: The sketch for the verb ‘to explode:TO_BLOW_UP’ Figure 3: The sketch for the verb ‘ взорвать ’:TO_BLOW_UP’ For 100 English sketches, 84 Russian sketches are used: it means that some Russian sketches corres- pond to more than one English sketch. Totally, the corpus includes 113 English-Russian sketch pairs. The choice of the English verbs is based on the Russian corpus which was built in (Ponomareva et al., 2021). The Russian corpus, in turn, includes only polysemantic verbs as an important point is to investigate how good the sketches can differentiate between various meanings of the verbs. To form the English sample, we have taken the verbs from the same semantic classes and set the threshold of 200 semantic links for each English verb: it means, the verb must have at least 200 links in the English texts corpus. (For comparison, the threshold for Russian verbs was 2000 links, but the Russian sketches were collected on the bigger dataset which includes more than 36 million links.) After it, 100 English sketches were chosen, which met the above mentioned criteria and seemed to be enough representative to show the ability of the sketches to deal with polysemy, word sense disambig- uation (WSD) problem, and asymmetrical compatibility of the verbs with similar semantics in different languages. Of course, the pilot corpus of 100 sketches is not enough for conducting representative con- trastive research, however, certain observations seem to be of interest for comparative studies even on the small sample, as it is demonstrated below. 4 What the mistakes in the sketches demonstrate The sketches are based on (1) the semantic relations the verb has in the text collection; (2) the work of the parser which classifies the relations according to their semantic roles and defines the meanings of the verbs. Therefore, the view of the sketch depends on the number of links the verb has in the corpus, and on the correctness of the parser’s work. Herein the following mistakes are possible, which concern the automatic generation of the sketches. 4.1 ‘Empty’ sketches
https://arxiv.org/abs/2505.17733v1
The insufficient number of links leads to partly ‘empty’ sketches, where the semantic roles contain very few fillers, up to only one. So when the semantic role column is partly empty, it can mean that the number of the role’s links in the corpus turned out to be poor (for instance, see the [Cause_Actant] slot in the sketch for ‘inflict’ on fig. 4]). As the number of texts grows, this problem occurs rarer. Another reason for the lack of fillers comes from the narrowness of the semantic role filling. That is, slots like [Object] or [Cause] have rather wide filling, while [Locative] and [Time] are more restricted in this respect. In turn, the Compreno parser has a large set of characteristic slots (for size, colour, speed, modality, and so on), so some slots possess rather narrow semantics and include a small set of fillers (like the [StaffOfPossessors] slot in the same sketch on fig. 4). Figure 4: The sketch for the verb ‘to inflict:TO_BRING_STATE_TO_SMB’ Moreover, there are verbs with narrow compatibility, such as lexical functions. For instance, see the [Object] slot in the sketch of ‘ играть :TO_COMMIT’ (fig. 13). In sketches like these, empty lines in the semantic slots are correct. 4.2 Incorrect semantic roles or incorrect fillers Other errors concern either the incorrect choice of the semantic slot for the given verb meaning, or the wrong fillers of the slot. As one of the key points is to examine how well the sketches solve the WSD problem, this type of mistakes is important for us. An illustration for the incorrect semantic slot is the Russian sketch for ‘ достав- лять :TO_BRING_STATE_TO_SMB’ (fig. 5), parallel for the above shown ‘in- flict:TO_BRING_STATE_TO_SMB’. It contains the [Locative_FinalPoint] slot, which must definitely belong to another meaning of the verb – ‘bring to some place’. Figure 5: The sketch for the verb ’ доставлять :TO_BRING_STATE_TO_SMB’ Examples of the wrong fillers have already been shown in (Ponomareva et al., 2021). The reasons are usually bound either with the statistics, or with the work of the parser. At the analysis stage, all possible hypotheses are built for the sentence – with all possible homonyms that can fit. The final structure turns out to be the one with the highest scores. In some cases, hypotheses with more frequent homonyms win due to their higher frequency, in spite of the fact that the whole structure with the wrong homonym gets lower evaluations. As the text collections for building the sketches grow, the statistics of the proper analysis improves, therefore, we expect that most part of the errors will be corrected with enlarging the corpora. Neverthe- less, in case of the improper work of the parser, the opportunity to correct the semantic models that the parser uses exists as well. 4.3 The syntactic homonymy Key difference between the semantic and the syntactic sketches is that in the former 1 surface realisation can correspond to various semantic roles. For instance, ‘for’-dependency can introduce Time, Purpose, Distance, Motive and a number of other relations. Usually, the proper semantic role is chosen according
https://arxiv.org/abs/2505.17733v1
to the semantic model of the given verb in Compreno – namely, the set of the semantic slots with the necessary surface realisation, the fillers of the semantic slots, and their status (which marks the role as more or less preferable). When the model or the statistics give improper results, the semantic role of the dependency can be defined incorrectly. For instance, see the [Purpose_Goal] slot of the verb ‘throw:TO_THROW’: the first line contains the nominal group ‘for 408 yards’, which must evidently belong to the [Locative_Distance] slot (fig. 6). Figure 6: The sketch for the verb ‘to throw:TO_THROW’ Another example is the group ‘for this moment’ in the [Time] slot instead of [Motive] in the sketch of ‘to thank’ (fig. 7). Here, on the contrary, [Motive] is definitely more frequent, but ‘moment’ is a very typical [Time] filler, therefore, high statistical evaluation of the correlation ‘moment’-[Time] made the incorrect structure win. Figure 7: The sketch for the verb ‘to thank:TO_THANK’ All the mistakes deal with different aspects of the WSD and homonymy problems. Their number does not seem significant, nevertheless, their statistical estimation must be made when creating a larger sketch corpus. 5 Cross-language differences between the sketches with similar semantics The one-language sketch corpora suggest good lexicographic portraits of the verbs, showing their most frequent semantic links sorted according to the semantic roles of the dependencies. Moreover, apart from purely lexicographic tasks, the sketches allow one to solve various problems bound both with the context usage of the verbs and with their polysemy. Another purpose of the sketches deals with contrastive studies. Parallel sketches from different lan- guages give perfect representation of the correlation between similar verbs, therefore, parallel sketch corpora would be helpful in this respect. Evidently, each sketch can correspond to more than one sketch in another language. To get a full set of all possible counterparts, one should take the necessary sketch in one language and the sketches for all the semantic equivalents in the same semantic class in another language. After it, one can range the counterparts according to their affinity with the primary verb. We have not made such full sets in the pilot corpus, however, adding this option is included in further plans. At the current stage of the project, the correlations between the English and the Russian sketches do not include all possible correlations for each verb, so the sketch pairs are just a subset of the possible variants. Some pairs look similar: both English and Russian sketches include the same set of semantic roles, and the semantic roles contain either fillers with close semantics, or just a wide range of fillers with no special semantic restrictions on them. At the same time, many sketches demonstrate significant differences between the English and Russian equivalents. Most of them concern the following situations: (a) some semantic slot is present in the sketch of one language and is absent – in the corresponding sketch in another language; (b) equivalent sketches contain the same sets of the roles in both languages, but the fillers of some role
https://arxiv.org/abs/2505.17733v1
differ significantly; (c) the semantic field where the considered verbs belong is structured differently in different languages. 5.1 Different semantic roles in the equivalent sketches from different languages Frequently, the semantic role sets in the parallel sketches do not coincide completely. It concerns both the actant roles and the circumstantial ones. The reasons can be different. First, the semantics of one verb may be wider than the semantics of the other, therefore, the model of the former can include additional roles which are absent in the model of the latter. Second, the model of both verbs can include the same sets of roles, however, the frequency of some roles may differ for various verbs, which can be motivated both by the verb’s semantics and by the representativeness and contents of the corpora for building the sketches. An example of the first case is the correlation between the semantic derivates in different languages. For instance, Russian verb ‘ трясти ’ ‘to shake’ does not attach the initial point dependency in contexts like (1) and (2), while the English ‘shake’ does: (1) A sound they couldn’t shake [from their Locative_InitialPoint: heads] – Звук, который им никак не удавалось вытряхнуть [из Locative_InitialPoint: головы]; (2) I saw immediately that my few belongings had been disturbed–collars not refolded, one of my chemises balled up and pushed into a corner, the tortoiseshell comb shaken [from its Locat- ive_InitialPoint: handkerchief]. – И сразу увидела, что в моих вещах кто-то рылся — воротники были сложены неаккуратно, одна из моих рубашек скомкана и засунута в угол, черепаховый гребень вытряхнут [из носового Locative_InitialPoint: платка] . In Russian, the semantic derivate ‘вытряхнуть’ ‘shake out’ is used when the initial point role is expressed in a sentence. Therefore, the sketches can show that ‘shake’ usually corresponds to the Russian ‘трясти ’ (which does not mark the ‘direction of shaking’), but can also correspond to ‘ вытрясти ’ (which denotes the ‘from’ direction) with the dependency of the initial point. Nevertheless, there can be occasional variations depending on the contents of the corpora, especially as far as less frequent verbs are considered. The more the corpora are, the more stable are the results. Thus we permanently enlarge the size of the dataset for building the sketches. As an instance of such statistical oscillations, see the sketches for “find:TO_SEEK_FIND” and “найти :TO_SEEK_FIND”. The first five roles coincide, but the sixth one is different – it is [Meta- phoric_Locative] for the English ‘find’ and [Modality] for the Russian ‘ найти ’ (fig. 8, 9): Figure 8: The sketch for the verb ‘to find:TO_SEEK_FIND’ Figure 9: The sketch for the verb ‘ найти :TO_SEEK_FIND’ Both roles – [Metaphoric_Locative] and [Modality] – can be frequently used with both verbs. In this case, the difference does not seem meaningful. 5.2 Different fillers of the semantic roles Let us consider some sketches for the descendants of the semantic class “TO_COMMIT”: the English verbs ‘do’, ‘play’ and the Russian verbs ‘ делать ’,‘играть ’. “TO_COMMIT” is a kind of lexical func- tion, where the verbs have rather narrow compatibility in the [Object]
https://arxiv.org/abs/2505.17733v1
role (place trust/hope vs pay a visit vs play a joke/trick vs take a look/try/walk/etc., and so on). As fig. 10, 11, 12 and 13 demonstrate, the compatibility of the verbs ‘do’ and ‘ делать ’ is rather wide, while ‘играть ’ combines with only four Object fillers. Figure 10: The sketch for the verb ‘to do:TO_COMMIT’ Figure 11: The sketch for the verb ‘ делать :TO_COMMIT’ Figure 12: The sketch for the verb ‘to play:TO_COMMIT’ Figure 13: The sketch for the verb ‘ играть :TO_COMMIT’ Besides, the four verbs differ in the sets of the semantic roles as well. [Agent], [Object_Situation] and [Time] are present in all four sketches. [Object] is absent in the sketches of ‘ играть ’ and ‘play’ as their compatibility does not include the corresponding fillers. ‘Do’ and ‘ играть /play’ include [Agent_Metaphoric] slot, while ‘ делать ’ does not include it. The reason seems to be in the semantics of the fillers of the [Object] and [Object_Situation] slots: the most frequent Object_Class fillers are ‘шаг’ ‘step’,‘выбор’ ‘choice’, ‘операция’ ‘operation’, ‘снимок’ ‘pic- ture’ and so on, which are more often combined with active human-like agent rather than inanimate agents like ‘economy, system, process’ and alike. As far as the circumstantial dependencies are concerned, both Russian sketches include the semantic roles of [Modality] and [Locative] while the English ‘do’ includes [Ch_Relation_Coincidence] slot (in the Compreno model, it characterizes objects or situations according to their similarity) and ‘play’ – [Addition] and [Sphere]. At first sight, these differences do not seem meaningful, however, it would be interesting to regard the sketches of the whole semantic class TO_COMMIT to examine how regular such correlations are. Another example concerns verbs with wider compatibility, where the restrictions on the Object role are not purely lexicalized, but concern a wider range of fillers with common semantic features. For instance, let us take the semantic field “TO_POUR” (something liquid or friable). English and Russian structure it differently as far as the core verbs’ compatibility is concerned. Namely, the English verb ‘to pour’ attaches objects which are liquid (water, wine), friable (sand, sugar), or consist of many small pieces (crystals, euros, diced meat, and so on). In Russian, the verb ‘ лить ’ is used with liquid objects only and the verb ‘ сыпать ’ – only with friable objects and objects consisting of many small pieces. Therefore, the Object slot fillers differ correspondingly in the sketches (fig. 14). Figure 14: The fragment of the sketches for the verbs ‘to pour:TO_POUR’, ‘ лить :TO_POUR, and ‘сыпать :TO_POUR’ Nonetheless, the amount of eight most frequent fillers which is usually shown in the sketches is not always enough to demonstrate such differences, as the most frequent objects can bear the same semantic features. As one can see, the sketches suggest a wide range of comparative data in the field of semantics and demonstrate the semantical differences between the verbs of the same semantic class both in different languages and within one language as well. 6 Conclusion In the given paper, we have presented the pilot corpus of the
https://arxiv.org/abs/2505.17733v1
English semantic sketches. As the sketches are provided with their semantic parallels in Russian, we have also illustrated what kind of comparative studies the sketches allow to conduct, especially as far as the differences in the semantic roles and their typical fillers are concerned. An important point is the ability of the sketches to deal with polysemy and to differentiate between various homonyms. We have also discussed common types of mistakes occurring while building the sketches and specu- lated about their linguistic and technical nature. Our further plans are to improve the sketches by obtaining them on a bigger dataset, to enlarge the sketch corpus and build the sketches for each verb from the dataset, to provide the corpus with some additional features, such as the opportunity to show more semantic slots and more fillers of the slots when necessary, and to see the correlations between all the verbs of the same semantic class. After it, the work on adding other languages to the sketch corpus will be started. At the same time, we work on the open corpus of the Compreno semantic mark-up which will include a detailed description of the mark-up principles and the semantic roles used in the mark-up, which will facilitate the understanding of the roles used in the sketches. The current corpus is available at github2. Besides, we continue the work on integrating the semantic sketches in the General Internet-Corpus of Russian (GICR). We hope the corpus would contribute to different NLP areas, especially to solving the WSD problem. References KV Anisimovich, K Yu Druzhkin, KA Zuev, FR Minlos, MA Petrova, and VP Selegei. 2012. Syntactic and semantic parser based on abbyy compreno linguistic technologies. // Proc Dialogue, Russian International Conference on Computational Linguistics , P 91–103. Eva Van Assche, Wouter Duyck, and Robert J Hartsuiker. 2012. Bilingual word recognition in a sentence context. Frontiers in psychology , 3:174. Curtis A Becker. 1980. Semantic context effects in visual word recognition: An analysis of semantic strategies. Memory & cognition , 8(6):493–512. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics , 5:135–146. Julia Detkova, Valeriy Novitskiy, Maria Petrova, and Vladimir Selegey. 2020. Differential semantic sketches for russian internet-corpora. // Proc Dialogue, Russian International Conference on Computational Linguistics , Moscow. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. // Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) , P 4171–4186, Minneapolis, Minnesota, June. Association for Computational Linguistics. Adam Kilgarriff, Vít Baisa, Jan Bušta, Miloš Jakubí ˇcek, V ojt ˇech Ková ˇr, Jan Michelfeit, Pavel Rychl `y, and Vít Suchomel. 2014. The sketch engine: ten years on. Lexicography , 1(1):7–36. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. // Neural and Information Processing System (NIPS) . MA Petrova. 2014. The compreno semantic model: the
https://arxiv.org/abs/2505.17733v1
arXiv:2505.17746v1 [cs.CL] 23 May 2025Fast Quiet-STaR: Thinking Without Thought Tokens Wei Huang1Yizhe Xiong2,3Xin Ye4 Zhijie Deng5Hui Chen2,3Zijia Lin2Guiguang Ding2,3 1School of Computer Science, Beijing University of Posts and Telecommunications 2Tsinghua University 3Beijing National Research Center for Information Science and Technology (BNRist) 4Kuaishou Technology 5Shanghai Jiao Tong University, Shanghai, China Abstract Large Language Models (LLMs) have achieved impressive performance across a range of natural language processing tasks. How- ever, recent advances demonstrate that fur- ther gains—particularly in complex reason- ing tasks—require more than merely scal- ing up model sizes or training data. One promising direction is to enable models to “think” during the reasoning process. Recently, Quiet-STaR significantly improves reasoning by generating token-level thought traces, but incurs substantial inference overhead. In this work, we propose Fast Quiet-STaR, a more efficient reasoning framework that preserves the benefits of token-level reasoning while re- ducing computational cost. Our method in- troduces a curriculum-learning-based training strategy that gradually reduces the number of thought tokens, enabling the model to inter- nalize more abstract and concise reasoning processes. We further extend this approach to the standard Next Token Prediction (NTP) setting through reinforcement learning-based fine-tuning, resulting in Fast Quiet-STaR NTP, which eliminates the need for explicit thought token generation during inference. Experi- ments on four benchmark datasets with Mis- tral 7B and Qwen2.5 7B demonstrate that Fast Quiet-STaR consistently outperforms Quiet- STaR in terms of average accuracy under the same inference time budget. Notably, Fast Quiet-STaR NTP achieves an average accuracy improvement of 9% on Mistral 7B and 5.7% on Qwen2.5 7B, while maintaining the same inference latency. Our code will be available at https://github.com/huangwei200012/Fast- Quiet-STaR. 1 Introduction Large Language Models (LLMs) (Achiam et al., 2023; Grattafiori et al., 2024a) have achieved re- markable progress in recent years by pretraining models with billions of parameters on massive NTP 8 12 16 Thought T okens Number3032343638404244Average ACCPre-Trained Quiet-ST aR Fast Quiet-ST aR Fast Quiet-ST aR NTP 0.00.10.20.30.40.50.60.70.80.9 Time to First T oken Inference TimeFigure 1: Performance comparison between Fast Quiet- STaR, Quiet-STaR and the pre-trained model (NTP). “Inference Time” represents the Time-to-First-Token (TTFT) of each model variant. Note that with the same number of thought tokens, Fast Quiet-STaR shares the same inference time with Quiet-STaR, but enjoys a sig- nificant performance boost. Additionally, Fast Quiet- STaR can be extended to the NTP setting, improving model performance without additional inference time overhead. datasets. However, merely scaling up model size or increasing the amount of training data is insuf- ficient for enabling strong performance on tasks that require complex reasoning or long-term plan- ning. To further enhance model capabilities, one promising direction is to enable models to engage in autonomous “thinking” before producing final answers. Recently, a growing body of research has explored this paradigm to strengthen the reason- ing abilities of LLMs. Notably, models such as OpenAI o1 (OpenAI et al., 2024), DeepSeek-R1 (Guo et al., 2025), QwQ (Zheng et al., 2024), and Kimi-1.5 (Team et al., 2025) have demonstrated impressive performance across a variety of chal- lenging tasks, such as mathematical competition problems (Hendrycks et al., 2021; Cobbe et al.,
https://arxiv.org/abs/2505.17746v1
2021). Recently, Quiet-STaR (Quiet Self-Taught Rea- soner) (Zelikman et al., 2024) has been proposed as a novel reasoning paradigm that shifts the think- ing process from the problem level to a finer, token-level granularity. In Quiet-STaR, before predicting the next token, the model first gener- ates a sequence of intermediate thought trace (rep- resented as <|start_of_thought|>,thought_token1, thought_token2, ... <|end_of_thought|> ), based on which the model predicts the next token. Com- pared to other approaches, Quiet-STaR can sig- nificantly enhance the model’s reasoning ability through a lightweight unsupervised training pro- cess. For instance, it achieves a 10% performance gain on CommonsenseQA (Talmor et al., 2018) for the Mistral 7B (Jiang et al., 2023) by continue pre-training with only 0.2M tokens, demonstrating remarkable improvements through efficient train- ing. Although Quiet-STaR significantly enhances the model’s reasoning capabilities, it substantially in- creases inference overhead due to the requirement of generating a thought trace for every token. As shown in Figure 1, even when using only 8 thought tokens, the average Time-To-First-Token (TTFT) of Quiet-STaR remains over 10 times higher than that of conventional next token prediction (NTP) models. Despite the high inference costs, these thought tokens cannot be directly reduced or elim- inated as they are the main contributor to perfor- mance improvements. For example, as shown in Figure 1, halving the number of thought tokens from 16 to 8 leads to a 4.7% accuracy drop. Such dilemma on efficiency severely undermines the practical value of Quiet-STaR. Prior work has shown that LLMs are capa- ble of skipping reasoning steps by omitting non- essential steps without sacrificing overall reason- ing performance (Liu et al., 2024). Inspired by this, we believe that within the Quiet-STaR reason- ing paradigm, the model can maintain its strong reasoning abilities obtained from long thought- trace training by compressing the number of thought tokens and keeping only a more ab- stract thought trace . To improve the efficiency of the Quiet-STaR reasoning paradigm with min- imal performance degradation, we propose Fast Quiet-STaR . We employ a multi-stage training strategy that progresses from easy to hard. That is, we gradually guide the model from generating a detailed thought trace using more thought tokens to generating a concise thought trace using fewer thought tokens. To further accelerate the Quiet- STaR inference paradigm to NLP-level efficiency, we employ a reinforcement learning-based fine-tuning strategy for Fast Quiet-STaR model under the NTP setting. The resulting Fast Quiet-STaR NTP model preserves the original thinking abilities of Fast Quiet-STaR while eliminating reliance on generating explicit thought trace during inference. We evaluate our method on two open-source models, Mistral 7B (Jiang et al., 2023) and Qwen2.5 7B (Qwen et al., 2025) across four public datasets. Extensive experiments show that, given the same number of thought tokens(same inference time), Fast Quiet-STaR achieves substantial perfor- mance gains over Quiet-STaR. Furthermore, under equivalent inference time, Fast Quiet-STaR NTP improves the average accuracy by 9% on Mistral 7B and 5.7% on Qwen2.5 7B compared to the orig- inal pre-trained models. We summarize our contribution as follows: •We propose Fast Quiet-STaR, a novel training paradigm
https://arxiv.org/abs/2505.17746v1
that compresses token-level thought traces to significantly reducing inference over- head while preserving the strong reasoning abilities imparted by the Quiet-STaR frame- work. •We introduce a curriculum learning-based multi-stage training strategy that progressively guides the model to learn a more concise thought trace, enabling it to internalize ef- ficient reasoning patterns and express them compactly without performance degradation. We further accelerate Fast Quiet-STaR to the standard NTP-level setting via reinforcement learning-based fine-tuning, enabling implicit reasoning without explicit thought token gen- eration. •Extensive experiments show that Fast Quiet- STaR achieves comparable or even better per- formance than standard Quiet-STaR while reducing thought tokens. Fast Quiet-STaR NTP significantly outperforms the pre-trained model without increasing the inference time. 2 Related Works 2.1 LLM Reasoning In recent years, enhancing the reasoning capabili- ties of large language models has become a major research focus (Rajani et al., 2019; Zhang et al., 2025; Pan et al., 2025). The Chain-of-Thought (CoT) (Kojima et al., 2022) prompting technique explicitly guides models to generate intermediate reasoning steps. Tree of Thoughts (ToT) (Yao et al., 2023) explores multiple reasoning paths through a tree-structured search. The CPO (Zhang et al., 2024) method combines ToT with Direct Prefer- ence Optimization (DPO) (Rafailov et al., 2023), using reasoning paths generated by ToT as paired training data to directly optimize the model’s CoT abilities. Self-Consistency (Wang et al., 2022) sam- ples multiple reasoning paths for the same prob- lem and selects the final answer through a voting mechanism. Methods based on Monte Carlo Tree Search (MCTS) (Qi et al., 2024) introduce classi- cal planning algorithms into the reasoning process. Coconut (Hao et al., 2024) explores the potential of unconstrained reasoning in latent spaces, highlight- ing the structural thinking capabilities of LLMs. In the latest research, reinforcement learning (RL) (Schulman et al., 2017) has emerged as a new paradigm for enhancing LLM reasoning. Ope- nAI’s o1 (OpenAI et al., 2024) achieves significant improvements in reasoning performance. Similarly, models such as DeepSeek-R1 (Guo et al., 2025), Kimi 1.5 (Team et al., 2025), and QWQ (Zheng et al., 2024) incorporate reinforcement learning (Shao et al., 2024) into pretrained models, exhibit- ing strong reasoning abilities. Unlike most approaches that prompt models to “think before answering” on a per-question ba- sis, Quiet-STaR (Zelikman et al., 2024) shifts the reasoning process to a finer-grained, token-level paradigm. By encouraging deep reasoning at every token generation step, Quiet-STaR further enhances reasoning quality. However, as it requires long- range reasoning at every token, it incurs substantial inference latency, which limits its applicability in real-world scenarios. 2.2 Curriculum Learning Curriculum Learning is a training strategy that or- ganizes the learning process by first presenting simpler examples and gradually introducing more complex ones. In recent years, curriculum learning has been widely adopted in the training of Large Language Models (LLMs) (Xu et al., 2020; Naïr et al., 2024). LDCAL (Li et al., 2024) leverages LLMs themselves to assess the difficulty of training instances, guiding the model to learn in an easy-to- hard sequence. TAPIR (Yue et al., 2024) constructs a task-aware curriculum scheduling framework that dynamically adjusts
https://arxiv.org/abs/2505.17746v1
the task distribution and pro- gressively increases task complexity. Moreover, curriculum learning has also been employed to im-prove LLMs’ understanding of long contexts by gradually increasing the context window size dur- ing training (Grattafiori et al., 2024b). Kimi 1.5 (Team et al., 2025) integrates a curriculum learning strategy during the reinforcement learning stage, allowing the model to start with simpler question before transitioning to more complex ones. Unlike existing studies that apply curriculum learning at the data or task scheduling level, our approach integrates curriculum learning into the token-level reasoning process. By combining this strategy with the Quiet-STaR inference paradigm, our Fast Quiet-STaR better learns reasoning behav- iors under limited Thought Tokens. 3 Methodology The training procedure of Fast Quiet-STaR (Zelik- man et al., 2024) is illustrated in Figure 2. Building upon Quiet-STaR, we propose a progressive, multi- stage training framework inspired by the principles of curriculum learning. This approach facilitates a gradual transition from easy to hard reasoning paradigms. In particular, during the final stage of training, we incorporate reinforcement learning to transition the reasoning paradigm of Quiet-STaR to the the standard NTP paradigm. 3.1 Quiet-STaR Quiet-STaR (Zelikman et al., 2024) is a method for enabling language models to autonomously learn to generate internal rationales—referred to as “thoughts”—in order to improve their ability to pre- dict future tokens.The training process consisting of three distinct phases—Think, Talk, and Learn. 3.1.1 Think Process Given a token sequence X={x0, x1, ..., x t}, Quiet-STaR n-m (n and m represent the number of thought/ahead tokens, respectively) generates a corresponding thought of length n-1, i.e., Ti= (ti1, ti2, ..., t i(n−1)), after each token xi. Each thought is enclosed by learned meta-tokens <|start_of_thought|> and <|end_of_thought|> , which serve to activate and terminate the gen- eration of thought, respectively. This process is executed in parallel using a custom attention mask, ensuring that each generated thought attends only to the corresponding prefix of the input sequence and the previously gener- ated tokens within the same thought, denoted as: P(tij|x1, ..., x i−1,<|start_of_thought|> , ..., t i(j−1)). Quiet-STaR16-8 ThinkProcess(TP)StartEnd1stTokenTalkProcess LMLMLMx!x"x#x$PredictX!LMPredictX"X!Reinforce and Log-likelihood LossContinuePretrainingContinuePretraining13thTokenTPTPTPTPPredictX#LMFastQuiet-STaRn-m (n<16 m<8) ThinkProcess(TP)StartEnd1stTokenTalkProcess LMLMLMx!x"x#x$PredictX!LMPredictX"X!Reinforce and Log-likelihood Loss(n-3)thTokenTPTPTPTPPredictX$%&LMContinuePretrainingContinuePretraining ContinuePretrainingBaseModel LMx!x"x#x$LMLMFastQuietSTaR-NTP Curriculum LearningExamplesText: John orders food for a massive restaurant. He orders 1000 pounds of beef for $8 per pound. He also orders twice that much chicken at $3 per pound, so John spent a total of $ [ ? ].Results of Different Inference ModesMode: FastQuiet-STaRResult: 14000Difficulty: Hard Mode: Quiet-STaR16-8Result:<start>1000*8=8000,2000*3=6000,8000+6000=14000<end>14000Difficulty: Easy Mode: Base modelResult: 11000Difficulty: HardMode: Multi Quiet-STaR8-4Result:<start>8000+6000=14000<end>14000Difficulty: Medium Increased DifficultyIncreased DifficultyAdapting Quiet-STaRParadigmFastQuiet-STaR Training Process LMx!x"x#x$LMLM Figure 2: Fast Quiet-STaR training pipeline and Curriculum Learning Examples. Note that m will be introduced in the Learn Process section. 3.1.2 Talk Process Quiet-STaR introduces a learnable interpolation mechanism. By employing a shallow MLP head to compute an interpolation weight w, conditioned on the hidden states of both the <|end_of_thought|> token and the original input tokens. This weight modulates the influence of post-thought logits on the final prediction. The resulting mixed log- probability is defined in Equation 1. logptalk i=wilogpbase i+ (1−wi) logpthought i (1) Among them, pbase irepresents the logits
https://arxiv.org/abs/2505.17746v1
before thought, and pthought i represents the logits after thought, w.r.t the token xi. 3.1.3 Learn Process Quiet-STaR leverages the REINFORCE algorithm (Phan et al., 2023) to optimize thought selection based on utility. It maximizes the log-likelihood of the next m(the number of ahead tokens) ground- truth tokens Xj+1:j+m+1given prior context and a candidate rationale Tj. To reduce variance, multi- ple rationale continuations are sampled per token. The reward rjfor each Tjis defined as the differ- ence between its log-likelihood logptalk j:j+mand the mean log-likelihood across all sampled rationales(see Eq. 2). Quiet-STaR incorporates this reward rj= log ptalk j:j+m(Xj+1:j+m+1) −log ¯ptalk j:j+m(Xj+1:j+m+1)(2) into a REINFORCE loss to update the model pa- rameters θ, encouraging thoughts that exceed the average, as shown in Equation 3. Additionally, Quiet-STaR includes a log-likelihood loss term, de- noted as LNLL i, to ensure that the model not only learns to optimize the talking-head but also contin- ues to receive next-token prediction signals for the base language model head. ∇θLREINFORCE j = −rj·∇θlogpθ(Tj|[X:j;<|start_of_thought|> ]) (3) 3.2 Fast Quiet-STaR 3.2.1 Fast Quiet-STaR Compared with the mainstream NTP reasoning paradigm, Quiet-STaR introduces a new reasoning mechanism of "think first, talk later" for each token. Since it allows thinking, this mechanism effectively reduces the difficulty of predicting the next token, making the model perform better. Within the Quiet- STaR framework, a key hyperparameter, n, denotes the number of thought tokens, which has a signifi- cant impact on model performance. As illustrated in Figure 1, the model exhibits stronger reasoning abilities when more thought tokens are provided, while its performance degrades noticeably as the number of thought tokens decreases. This phenomenon raises a critical question: why does a reduced number of thought tokens sig- nificantly impair the performance of Quiet- STaR, and how can we minimize the use of thought tokens without compromising model performance? To investigate this, we analyze the thought traces under varying numbers of thought tokens. As shown in the lower part of Figure 2, reducing the number of thought tokens forces the model to complete the reasoning process within a shorter sequence. This poses greater demands on the model’s ability to compress its reasoning steps, presenting a more challenging inference setting. In contrast to learning under easier setting (with more thought tokens), directly training the model on more difficult ones (with fewer thought tokens) proves less effective. This observation aligns with a core insight from curriculum learning: models often struggle to learn effectively when exposed to high-difficulty tasks early in training. Therefore, adopting a curriculum learning strategy, which pro- gressively trains Quiet-STaR from easier settings to harder ones, holds promise for reasoning perfor- mance with limited thought tokens. Based on the above observations, we adopt a curriculum learning strategy to facilitate the acqui- sition of reasoning paradigms and propose Fast Quiet-STaR approach. This method decomposes the training process into multiple stages, each aligned with a specific level of reasoning difficulty and corresponding modeling objective. In the initial stage, the model is trained with a larger number of thought tokens. This setting rep- resents a easy reasoning setting. As
https://arxiv.org/abs/2505.17746v1
training pro- gresses, we gradually reduce the number of thought tokens, thereby encouraging the model to engage in more concise and abstract reasoning under in- creasingly constrained resources. This encourages the model to progressively adapt to more difficult reasoning setting, enhancing both its reasoning ef- ficiency and its generalization capabilities in lower- resource scenarios. Specifically, we begin with 16 thought tokens and 8 ahead tokens (16-8). During training, we gradually reduce it to 12-4 and 8-4. 3.2.2 Fast Quiet-STaR NTP Although the number of thought tokens has been reduced, inference based on the Quiet-STaR paradigm still requires significantly higher compu-tational resources compared to the NTP approach. To address this issue, we adopt reinforcement learn- ing (Phan et al., 2023) to transition the model’s inference paradigm from Fast Quiet-STaR to NTP, called Fast Quiet-STaR NTP. Specifically, we ini- tialize an NTP model using the checkpoint obtained from the last stage of the multi-stage training that includes 8 thought tokens and 4 ahead tokens. The log-likelihood loss after thinking of this check- point serves as a reference for computing rewards in reinforcement learning. The process of calculat- ing reward is as follows: rj=LFastQuietSTaR − LFastQuietSTaR −NTP (4) where LFastQuietSTaR represents the negative log-likelihood loss of Fast Quiet-STaR 8-4 at thejth token after a thinking process, and LFastQuietSTaR −NTP represents the negative log- likelihood loss of Fast Quiet-STaR NTP at the jth token. The final loss function is as follows: ∇θLREINFORCE j =−rj· ∇θlogpθ(xj|X:j)(5) Through reinforcement learning, the model is en- couraged to emulate the prediction quality of Fast Quiet-STaR model without explicitly generating intermediate reasoning tokens during inference. Notably, this transition enables Fast Quiet-STaR NTP to effectively internalize the reasoning pro- cess, compressing and integrating the previously explicit “thinking” into its latent representations. 4 Experiments 4.1 Experimental Settings Post-Training Settings. We perform post-training on Mistral 7B (Jiang et al., 2023) and Qwen2.5 7B (Qwen et al., 2025) using the OpenWebMath dataset (Paster et al., 2023) and evaluate its abil- ity to directly predict answers on the Common- senseQA (Talmor et al., 2018) and GSM8K (Cobbe et al., 2021) benchmarks. Following (Zelikman et al., 2024), we calculate the accuracy rate as: ACC =Ql i=0P(Ai|Q1,...,Q k,Ai,...Ai−1)Ql i=0(P Aj∈Sans(P(Aj|Q1,...,Q k,Ai,...Ai−1))), where Qirepresents the question token, Airep- resents the answer token, kandlrepresent their lengths, and Sansrepresents the candidate set of an- swers (e.g. Sans={A, B, C, D, E }for Common- senseQA). This evaluation and training setup is con- sistent with the Quiet-STaR (Zelikman et al., 2024). To further assess the effectiveness of Fast Quiet- STaR in general reasoning tasks, we also introduce Table 1: Performance (%) comparison. Bold andunderline denote the best and second-best performance of models. For each method, we report their time to first token (TTFT, in seconds). Performance ∆represents the difference between Fast Quiet-STaR NTP and Pre-Trained. Method Thought Tokens Ahead Tokens TTFT (s) PIQA SIQA CommonsenseQA GSM8K A VG Mistral-7B Pre-Trained 1 1 0.028 45.9 41.6 35.4 4.9 32.0 Quiet-STaR 16 8 0.738 54.7 47.0 45.3 9.8 39.2 Quiet-STaR 12 4 0.550 53.1 45.7 43.4 8.4 37.7 Fast Quiet-STaR 12 4 0.550 59.0 52.5
https://arxiv.org/abs/2505.17746v1
50.7 10.0 43.1 Quiet-STaR 8 4 0.305 49.1 42.2 39.3 7.2 34.5 Fast Quiet-STaR 8 4 0.305 56.9 51.1 49.0 9.8 41.7 Fast Quiet-STaR-NTP 1 1 0.028 55.0 50.1 49.2 9.6 41.0 Performance ∆ - - - +9.1 +8.5 +13.8 +4.7 +9.0 Qwen2.5-7B Pre-Trained 1 1 0.026 70.1 60.7 52.4 11.6 48.7 Quiet-STaR 16 8 0.633 77.6 68.1 66.5 17.7 57.5 Quiet-STaR 12 4 0.481 72.4 61.8 59.7 17.1 52.8 Fast Quiet-STaR 12 4 0.481 74.3 64.5 63.9 17.6 55.1 Quiet-STaR 8 4 0.269 70.2 60.3 54.9 11.9 49.3 Fast Quiet-STaR 8 4 0.269 74.5 63.4 59.3 16.9 53.5 Fast Quiet-STaR-NTP 1 1 0.026 74.9 65.8 60.3 16.5 54.4 Performance ∆ - - - +4.8 +5.1 +7.9 +4.9 +5.7 Table 2: Comparison of generation latency(in seconds) between different methods. For prefix length 256 and generate length 128, we use a prompt of 256 tokens and let the model generate 128 tokens after the prompt. A VG ACC represents the average accuracy on PIQA, SIQA, CommonsenseQA and GSM8K. Prefix Length 256 512 A VG Generate Length 128 256 256 512 ACC Pre-Trained 3.2 7.3 8.8 17.1 32.0 Quiet-STaR 16-8 52.7 116.9 167.0 326.4 39.2 Quiet-STaR 12-4 40.6 92.9 102.4 288.6 37.7 Fast Quiet-STaR 12-4 40.6 92.9 102.4 288.6 43.1 Quiet-STaR 8-4 33.0 65.9 82.4 184.4 34.5 Fast Quiet-STaR 8-4 33.0 65.9 82.4 184.4 41.7 Fast Quiet-STaR-NTP 3.2 7.3 8.8 17.1 41.0 two more general-purpose evaluation benchmarks: SIQA (Sap et al., 2019) and PIQA (Bisk et al., 2020). Implementation details. All training experi- ments are conducted on 8 H800 GPUs. For Quiet- STaR, we train for 100 steps, and for Fast Quiet- STaR, we select the last ckpt in the previous stage for initialization and train for another 50 steps for each training stage. See Appendix A for more de- tails. 4.2 Main Results We evaluate Quiet-STaR, Fast Quiet-STaR, and Fast Quiet-STaR NTP on four benchmarks: PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), Com- monsenseQA (Talmor et al., 2018), and GSM8K(Cobbe et al., 2021) (Table 1). Under equal TTFT , Fast Quiet-STaR consistently outperforms Quiet- STaR, exhibiting stable performance even as the number of thought tokens decreases—unlike Quiet- STaR, which degrades significantly. For Mistral 7B, multi-stage training further boosts performance: Fast Quiet-STaR with 8 tokens surpasses the 16- token variant by 1.8% while cutting inference time to 41.3%. Compared to pre-trained baselines, Fast Quiet-STaR NTP achieves notable gains without added compute, improving average accuracy by 9% on Mistral 7B and 5.7% on Qwen2.5 7B. These results validate the effectiveness of incorporating a curriculum learning strategy within the Quiet-STaR framework to simultaneously improve both model efficiency and performance. We further analyze generation latency, a more general metric for evaluating speed. For pre- fix lengths of 256 and 512, the models generate 128/256 and 256/512 tokens, respectively (Table 2). Fast Quiet-STaR NTP significantly reduces latency, achieving just 6% of the end-to-end generation time of Quiet-STaR 16-8 (for 256-128), on par with the pre-trained baseline. Additionally, it yields accu- racy gains of 0.8% and 9%, respectively. These results highlight Fast Quiet-STaR NTP effective- ness
https://arxiv.org/abs/2505.17746v1
in addressing both the latency of Quiet-STaR and the poor performance of standard pre-trained models. 25 50 75 100 125 150 175 200 Step Number3032343638404244Average Accuracy(%) Variation of Average Accuracy with Training Steps Quiet-ST aR 16-8 Fast Quiet-ST aR 12-4 Fast Quiet-ST aR 8-4 Quiet-ST aR 8-4 Rev Quiet-ST aR 12-4 Rev Quiet-ST aR 16-8Figure 3: Comparison of the average accuracy of Fast Quiet-STaR and UP Quiet-STaR with training steps. 4.3 Experimental Analysis We choose Mistral 7B (Jiang et al., 2023) for our analytical experiments, which is consistent with Quiet-STaR(Zelikman et al., 2024). 4.3.1 Ablation Studies Curriculum Learning. To evaluate the effective- ness of our easy-to-hard multi-stage curriculum learning training strategy, we experiment with an alternative where we reverse the entire training pro- cess. Specifically, we start from the Quiet-STaR 8-4 model and follow a “8-4 →12-4→16-8” training sequence. At each stage, the model is initialized with the weights obtained from the pre- vious stage. We refer to this series of progressively trained models as Rev Quiet-STaR. We compare the average performance of Rev Quiet-STaR and Fast Quiet-STaR across four benchmarks: PIQA, SIQA, CommonsenseQA, and GSM8K (Figure 3). Experimental results indicate that the multi-stage training method, progressing from difficult to eas- ier, does not lead to performance improvements. Notably, Rev Quiet-STaR 16-8 even underperforms Fast Quiet-STaR 8-4, despite utilizing a larger num- ber of thought tokens. Reinforcement Learning Initialization. To study the impact of initialization, we compare Fast Quiet-STaR 8-4 with two alternatives: the pre- trained model and Quiet-STaR 16-8. We evaluate all approaches on four benchmarks—PIQA, SIQA, CommonsenseQA, and GSM8K—summarized in Figure 4. Results show that Fast Quiet-STaR 8-4 yields the best performance, followed by Quiet- STaR 16-8, and then the pre-trained model. We attribute this to Fast Quiet-STaR 8-4’s ability to generate a compact yet informative thought trace, which is conducive to further improving efficiency and expanding the reasoning paradigm to NTP. In PIQA SIQA CommonsenseQA GSM8K T asks40.042.545.047.550.052.555.057.560.0PIQA,SIQA,CommonsenseQA Accuracy (%)Init With Pre-Trained Init With Quiet-ST aR 16-8 Init With Fast Quiet-ST aR 8-4 6789101112 GSM8K Accuracy (%)Figure 4: Comparison of Fast Quiet-STaR NTP under different initialization schemes.The left vertical axis cor- responds to the average accuracy on PIQA, SIQA, and CommonsenseQA, while the right vertical axis indicates the accuracy on GSM8K. Table 3: Performance comparison between Quiet-STaR NTP and Fast Quiet-STaR NTP. CSQA represents for CommonsenseQA, and Performance ∆represents the difference between Fast Quiet-STaR NTP and Quiet- STaR NTP. Method PIQA SIQA CSQA GSM8K A VG Quiet-STaR NTP 49.1 44.3 42.5 7.3 38.1 Fast Quiet-STaR NTP 55.0 50.1 49.2 9.6 41.0 Performance ∆ +5.9 +5.8 +6.7 +2.3 +5.2 contrast, the pre-trained model lacks an explicit thought trace prior; Quiet-STaR 16-8 provides de- tailed thought traces, which rely on a longer rea- soning process, which may lead to a larger span of reasoning paradigm difficulty when learning, thus affecting the overall training performance. Fast Quiet-STaR NTP Without Curriculum Learning. To evaluate the effectiveness of the cur- riculum learning procedure “16-8 →12-4→8-4 →NTP”, we omit the intermediate stages. Specif- ically, we directly initialize the pre-trained model with Quiet-STaR 16-8 and use
https://arxiv.org/abs/2505.17746v1
its log-likelihood loss as the reference for computing rewards in rein- forcement learning, resulting in Quiet-STaR NTP. As shown in Table 3, this shortcut results in a 5.2% drop in average accuracy compared to Fast Quiet- STaR NTP obtained through the full curriculum. These results underscore the critical role of the curriculum learning process in enhancing overall model performance. 4.3.2 Data Efficiency To ensure that Fast Quiet-STaR’s performance gains do not stem from an increase in data vol- ume, we track GSM8K accuracy throughout train- ing (Figure 5). Quiet-STaR trains for 150 steps, with performance peaking around step 100 and de- 25 50 75 100 125 150 175 200 Step Number567891011GSM8K Accuracy(%) Variation of GSM8K Accuracy with Training Steps Quiet-ST aR 16-8 Quiet-ST aR 12-4 Quiet-ST aR 8-4 Fast Quiet-ST aR 12-4 Fast Quiet-ST aR 8-4Figure 5: Comparison of the accuracy of Quiet-STaR and Fast Quiet-STaR on GSM8K during training. Table 4: Zero-shot performance on Fast Quiet-STaR and Pre-Trained applied to chain-of-thought on GSM8K. Performance ∆represents the difference between Fast Quiet-STaR NTP and Pre-Trained. Method maj@2 maj@3 maj@4 maj@5 maj@6 Pre-Trained 28.5 32.6 37.5 40.3 43.3 Fast Quiet-STaR NTP 36.0 40.6 45.8 49.2 52.4 Performance ∆ +7.5 +8.0 +8.3 +8.9 +9.1 clining thereafter—consistent with prior findings (Zelikman et al., 2024). In contrast, Fast Quiet- STaR achieves strong results with just 20–40 ad- ditional steps. These results suggest that the gains arise from the progressive learning mechanism of multi-stage training, not from greater data exposure. 4.3.3 Performance on Generation Tasks To evaluate the performance of Fast Quiet-STaR on generative tasks, we compare Fast Quiet-STaR NTP with the original pre-trained model under the Next Token Prediction (NTP) inference paradigm on the GSM8K dataset. Specifically, we adopt the Chain-of-Thought (CoT) reasoning approach and measure accuracy using majority voting over 6 samples (cot-maj@6), with results as show in table 4. Experimental results show that as the number of votes increases, the performance advantage of Fast Quiet-STaR NTP over the pretrained model becomes more pronounced. On the cot-maj@6 metric, Fast Quiet-STaR NTP achieves an accuracy improvement from 43.3% to 52.4%, demonstrating its effectiveness in complex reasoning tasks. These results demonstrate that Fast Quiet-STaR can fur- ther enhance inference performance on top of CoT reasoning and Fast Quiet-STaR is complemen- tary to CoT , rather than redundant. Text ExamplesQuestion: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? Answer: 3Key Token Thinking ProcessFast Quiet-STaR8-4half <|startthought|> 1/2.<|endthought|>fiber <|startthought|> 1 bolt. How<|endthought|>Answer: <|startthought|> 2 + 1<|endthought|>Quiet-STaR8-4half <|startthought|> 1 1<|endthought|> fiber <|startthought|> 1 1/<|endthought|>Answer: <|startthought|> Q: The<|endthought|>Figure 6: Examples of the text and its thought process at key tokens. 4.3.4 Thought token analysis We visualize the thought tokens generated by Quiet- STaR 8-4 and Fast Quiet-STaR 8-4 at key posi- tions—tokens most informative for final predic- tions—on the GSM8K dataset to examine their internal reasoning behavior. As shown in Figure 6, Quiet-STaR 8-4 produces relatively unstructured thoughts, indicating incomplete acquisition of the Quiet-STaR reasoning paradigm. In contrast, Fast Quiet-STaR 8-4 demonstrates more abstract and goal-directed reasoning behavior. These
https://arxiv.org/abs/2505.17746v1
observa- tions indicate that the incorporation of a curriculum learning strategy—progressing from easier to more difficult—enables Fast Quiet-STaR to gradually acquire the ability to perform effective reasoning under resource constraints. 5 Conclusion In this paper, we proposed Fast Quiet-STaR, an efficient extension of the Quiet-STaR reasoning paradigm that maintains the core benefits of fine- grained token-level reasoning while significantly reducing inference overhead. By leveraging a cur- riculum learning-based training strategy that pro- gressively reduces the number of thought tokens, Fast Quiet-STaR enables models to develop com- pact yet effective reasoning abilities. Furthermore, through reinforcement learning-based fine-tuning, we extend this paradigm to the standard Next Token Prediction setting, eliminating the need for explicit thought-token generation during inference. Experi- ments on Mistral 7B and Qwen2.5 7B across four benchmark datasets show that Fast Quiet-STaR achieves substantial gains over Quiet-STaR under the same number of thought tokens, and Fast Quiet- STaR NTP outperforms the pre-trained model and performs on par with Quiet-STaR. These results highlight Fast Quiet-STaR as a practical solution for enhancing reasoning capabilities in LLMs. References Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, and 1 others. 2023. Gpt-4 techni- cal report. arXiv preprint arXiv:2303.08774 . Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, and 1 others. 2020. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence , vol- ume 34, pages 7432–7439. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and 1 others. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 . Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024a. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 . Aaron Grattafiori, Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al- Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Alex Vaughan, and 1 others. 2024b. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 . Daya Guo, Dejian Yang, Haowei Zhang, Junxiao Song, Ruoyu Zhang, Runxin Xu, Qihao Zhu, Shi- rong Ma, Peiyi Wang, Xiao Bi, and 1 others. 2025. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning. arXiv preprint arXiv:2501.12948 . Shibo Hao, Sainbayar Sukhbaatar, DiJia Su, Xian Li, Zhiting Hu, Jason Weston, and Yuandong Tian. 2024. Training large language models to reason in a contin- uous latent space. arXiv preprint arXiv:2412.06769 . Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. NeurIPS . Albert Q. Jiang, Alexandre Sablayrolles, Arthur Men- sch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guil- laume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, and William El Sayed. 2023. Mistral 7b. Preprint , arXiv:2310.06825.Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022. Large
https://arxiv.org/abs/2505.17746v1